Feb 18 11:49:24 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 18 11:49:24 crc restorecon[4693]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 11:49:24 crc restorecon[4693]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 11:49:24 crc restorecon[4693]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc 
restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:49:24 crc restorecon[4693]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:49:24 crc restorecon[4693]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc 
restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 
11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:49:24 crc restorecon[4693]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc 
restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 11:49:24 crc restorecon[4693]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:49:24 crc restorecon[4693]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:49:24 crc restorecon[4693]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:49:24 crc 
restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 11:49:24 crc restorecon[4693]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:24
crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 11:49:24 crc restorecon[4693]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 11:49:24 crc restorecon[4693]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:24 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc 
restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 11:49:25 crc restorecon[4693]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 
crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc 
restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc 
restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:49:25 crc restorecon[4693]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 11:49:25 crc restorecon[4693]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 18 11:49:26 crc kubenswrapper[4717]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 11:49:26 crc kubenswrapper[4717]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 18 11:49:26 crc kubenswrapper[4717]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 11:49:26 crc kubenswrapper[4717]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 18 11:49:26 crc kubenswrapper[4717]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 18 11:49:26 crc kubenswrapper[4717]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.443438 4717 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448607 4717 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448633 4717 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448640 4717 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448645 4717 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448650 4717 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448655 4717 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448659 4717 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448666 4717 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448671 4717 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 
11:49:26.448676 4717 feature_gate.go:330] unrecognized feature gate: Example Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448681 4717 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448686 4717 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448691 4717 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448696 4717 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448700 4717 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448704 4717 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448709 4717 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448716 4717 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448732 4717 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448740 4717 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448747 4717 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448755 4717 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448761 4717 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448766 4717 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448771 4717 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448775 4717 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448780 4717 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448784 4717 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448789 4717 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448794 4717 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448798 4717 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448803 4717 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448807 4717 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448812 4717 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448817 4717 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448822 4717 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448827 4717 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448832 4717 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448837 4717 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448843 4717 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448849 4717 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448854 4717 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448859 4717 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448863 4717 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448868 4717 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448873 4717 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448877 4717 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448882 4717 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448887 4717 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448893 4717 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448897 4717 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448902 4717 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448907 4717 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448911 4717 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448916 4717 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448920 4717 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448925 4717 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448929 4717 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448933 4717 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448940 4717 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448945 4717 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448950 4717 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448954 4717 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448960 4717 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448964 4717 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448969 4717 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448974 4717 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448980 4717 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448986 4717 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448990 4717 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.448996 4717 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449099 4717 flags.go:64] FLAG: --address="0.0.0.0"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449111 4717 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449121 4717 flags.go:64] FLAG: --anonymous-auth="true"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449128 4717 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449135 4717 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449141 4717 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449148 4717 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449155 4717 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449160 4717 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449166 4717 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449173 4717 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449179 4717 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449184 4717 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449190 4717 flags.go:64] FLAG: --cgroup-root=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449195 4717 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449201 4717 flags.go:64] FLAG: --client-ca-file=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449206 4717 flags.go:64] FLAG: --cloud-config=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449211 4717 flags.go:64] FLAG: --cloud-provider=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449216 4717 flags.go:64] FLAG: --cluster-dns="[]"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449223 4717 flags.go:64] FLAG: --cluster-domain=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449228 4717 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449234 4717 flags.go:64] FLAG: --config-dir=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449239 4717 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449244 4717 flags.go:64] FLAG: --container-log-max-files="5"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449252 4717 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449274 4717 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449280 4717 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449285 4717 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449289 4717 flags.go:64] FLAG: --contention-profiling="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449294 4717 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449298 4717 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449303 4717 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449308 4717 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449317 4717 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449321 4717 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449326 4717 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449330 4717 flags.go:64] FLAG: --enable-load-reader="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449335 4717 flags.go:64] FLAG: --enable-server="true"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449344 4717 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449356 4717 flags.go:64] FLAG: --event-burst="100"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449362 4717 flags.go:64] FLAG: --event-qps="50"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449367 4717 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449373 4717 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449378 4717 flags.go:64] FLAG: --eviction-hard=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449384 4717 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449388 4717 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449393 4717 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449399 4717 flags.go:64] FLAG: --eviction-soft=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449404 4717 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449408 4717 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449414 4717 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449419 4717 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449424 4717 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449430 4717 flags.go:64] FLAG: --fail-swap-on="true"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449434 4717 flags.go:64] FLAG: --feature-gates=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449439 4717 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449443 4717 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449448 4717 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449452 4717 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449457 4717 flags.go:64] FLAG: --healthz-port="10248"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449461 4717 flags.go:64] FLAG: --help="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449465 4717 flags.go:64] FLAG: --hostname-override=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449469 4717 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449476 4717 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449480 4717 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449484 4717 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449488 4717 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449493 4717 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449497 4717 flags.go:64] FLAG: --image-service-endpoint=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449501 4717 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449505 4717 flags.go:64] FLAG: --kube-api-burst="100"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449509 4717 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449514 4717 flags.go:64] FLAG: --kube-api-qps="50"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449518 4717 flags.go:64] FLAG: --kube-reserved=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449522 4717 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449526 4717 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449530 4717 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449534 4717 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449538 4717 flags.go:64] FLAG: --lock-file=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449542 4717 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449547 4717 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449551 4717 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449557 4717 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449561 4717 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449565 4717 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449570 4717 flags.go:64] FLAG: --logging-format="text"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449575 4717 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449580 4717 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449584 4717 flags.go:64] FLAG: --manifest-url=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449588 4717 flags.go:64] FLAG: --manifest-url-header=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449593 4717 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449597 4717 flags.go:64] FLAG: --max-open-files="1000000"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449603 4717 flags.go:64] FLAG: --max-pods="110"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449607 4717 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449611 4717 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449616 4717 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449620 4717 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449624 4717 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449629 4717 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449633 4717 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449643 4717 flags.go:64] FLAG: --node-status-max-images="50"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449647 4717 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449651 4717 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449655 4717 flags.go:64] FLAG: --pod-cidr=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449659 4717 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449666 4717 flags.go:64] FLAG: --pod-manifest-path=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449670 4717 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449674 4717 flags.go:64] FLAG: --pods-per-core="0"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449678 4717 flags.go:64] FLAG: --port="10250"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449682 4717 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449686 4717 flags.go:64] FLAG: --provider-id=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449691 4717 flags.go:64] FLAG: --qos-reserved=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449695 4717 flags.go:64] FLAG: --read-only-port="10255"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449700 4717 flags.go:64] FLAG: --register-node="true"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449704 4717 flags.go:64] FLAG: --register-schedulable="true"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449708 4717 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449715 4717 flags.go:64] FLAG: --registry-burst="10"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449719 4717 flags.go:64] FLAG: --registry-qps="5"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449723 4717 flags.go:64] FLAG: --reserved-cpus=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449727 4717 flags.go:64] FLAG: --reserved-memory=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449732 4717 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449736 4717 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449741 4717 flags.go:64] FLAG: --rotate-certificates="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449745 4717 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449749 4717 flags.go:64] FLAG: --runonce="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449753 4717 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449757 4717 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449761 4717 flags.go:64] FLAG: --seccomp-default="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449766 4717 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449770 4717 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449774 4717 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449779 4717 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449783 4717 flags.go:64] FLAG: --storage-driver-password="root"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449787 4717 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449791 4717 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449795 4717 flags.go:64] FLAG: --storage-driver-user="root"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449799 4717 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449803 4717 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449807 4717 flags.go:64] FLAG: --system-cgroups=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449812 4717 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449818 4717 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449822 4717 flags.go:64] FLAG: --tls-cert-file=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449827 4717 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449832 4717 flags.go:64] FLAG: --tls-min-version=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449836 4717 flags.go:64] FLAG: --tls-private-key-file=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449841 4717 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449846 4717 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449851 4717 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449856 4717 flags.go:64] FLAG: --v="2"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449862 4717 flags.go:64] FLAG: --version="false"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449869 4717 flags.go:64] FLAG: --vmodule=""
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449875 4717 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.449880 4717 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.449979 4717 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.449989 4717 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.449996 4717 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450001 4717 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450005 4717 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450011 4717 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450016 4717 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450023 4717 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450027 4717 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450030 4717 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450034 4717 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450038 4717 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450045 4717 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450049 4717 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450053 4717 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450057 4717 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450060 4717 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450064 4717 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450067 4717 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450071 4717 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450074 4717 feature_gate.go:330] unrecognized feature gate: Example
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450078 4717 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450081 4717 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450085 4717 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450088 4717 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450092 4717 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450095 4717 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450098 4717 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450102 4717 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450111 4717 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450114 4717 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450118 4717 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450121 4717 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450125 4717 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450128 4717 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450131 4717 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450136 4717 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450141 4717 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450146 4717 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450151 4717 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450156 4717 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450160 4717 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450164 4717 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450168 4717 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450171 4717 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450175 4717 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450179 4717 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450183 4717 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450186 4717 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450190 4717 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450193 4717 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450196 4717 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450200 4717 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450203 4717 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450207 4717 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450210 4717 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450213 4717 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450217 4717 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450220 4717 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450224 4717 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450227 4717 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450233 4717 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450236 4717 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450240 4717 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450243 4717 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450247 4717 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450251 4717 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450272 4717 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450276 4717 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450280 4717 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.450285 4717 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.450302 4717 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.502422 4717 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.502468 4717 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502580 4717 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502592 4717 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 11:49:26 crc 
kubenswrapper[4717]: W0218 11:49:26.502596 4717 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502600 4717 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502605 4717 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502608 4717 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502613 4717 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502616 4717 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502620 4717 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502623 4717 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502627 4717 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502631 4717 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502635 4717 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502638 4717 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502642 4717 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502645 4717 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502649 4717 
feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502652 4717 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502656 4717 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502660 4717 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502664 4717 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502669 4717 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502677 4717 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502682 4717 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502687 4717 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502691 4717 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502695 4717 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502698 4717 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502702 4717 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502706 4717 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502711 4717 feature_gate.go:330] unrecognized feature 
gate: OnClusterBuild Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502716 4717 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502720 4717 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502729 4717 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502734 4717 feature_gate.go:330] unrecognized feature gate: Example Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502739 4717 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502743 4717 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502747 4717 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502750 4717 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502753 4717 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502757 4717 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502762 4717 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502767 4717 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502771 4717 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502776 4717 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502780 4717 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502784 4717 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502788 4717 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502791 4717 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502795 4717 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502798 4717 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502802 4717 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502805 4717 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502809 4717 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502812 4717 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502815 4717 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502819 4717 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502823 4717 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502826 4717 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 11:49:26 crc kubenswrapper[4717]: 
W0218 11:49:26.502829 4717 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502834 4717 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502839 4717 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502842 4717 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502846 4717 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502849 4717 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502853 4717 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502857 4717 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502861 4717 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502865 4717 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502869 4717 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.502873 4717 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.502880 4717 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false 
ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503007 4717 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503015 4717 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503019 4717 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503024 4717 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503027 4717 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503031 4717 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503035 4717 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503039 4717 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503043 4717 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503046 4717 feature_gate.go:330] unrecognized feature gate: Example Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503050 4717 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503053 4717 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503057 4717 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 11:49:26 crc 
kubenswrapper[4717]: W0218 11:49:26.503060 4717 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503065 4717 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503069 4717 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503072 4717 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503076 4717 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503079 4717 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503083 4717 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503087 4717 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503093 4717 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503099 4717 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503103 4717 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503107 4717 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503111 4717 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503115 4717 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503118 4717 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503122 4717 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503126 4717 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503129 4717 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503133 4717 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503136 4717 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503140 4717 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503143 4717 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503147 4717 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 11:49:26 crc 
kubenswrapper[4717]: W0218 11:49:26.503151 4717 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503157 4717 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503161 4717 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503164 4717 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503167 4717 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503171 4717 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503174 4717 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503178 4717 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503182 4717 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503187 4717 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503191 4717 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503196 4717 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503199 4717 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503203 4717 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503206 4717 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503210 4717 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503215 4717 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503220 4717 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503224 4717 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503229 4717 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503233 4717 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503237 4717 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503242 4717 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503246 4717 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503249 4717 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503253 4717 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503274 4717 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503279 4717 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503287 4717 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503291 4717 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503295 4717 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503299 4717 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503303 4717 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503307 4717 feature_gate.go:330] 
unrecognized feature gate: PlatformOperators Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.503311 4717 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.503317 4717 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.503497 4717 server.go:940] "Client rotation is on, will bootstrap in background" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.528551 4717 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.528704 4717 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.587971 4717 server.go:997] "Starting client certificate rotation" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.588060 4717 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.603970 4717 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-10 11:41:27.244278235 +0000 UTC Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.604113 4717 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.687445 4717 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.706009 4717 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 11:49:26 crc kubenswrapper[4717]: E0218 11:49:26.710915 4717 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.730662 4717 log.go:25] "Validated CRI v1 runtime API" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.823101 4717 log.go:25] "Validated CRI v1 image API" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.826840 4717 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.833647 4717 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-18-11-45-09-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.833703 4717 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.850361 4717 manager.go:217] Machine: {Timestamp:2026-02-18 11:49:26.847116923 +0000 UTC m=+1.249218269 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:956ecb2c-bb9d-4b7e-b56f-b439ce483321 BootID:1a04f158-1706-4e05-bc60-cb864ebb382f Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 
Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:47:70:14 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:47:70:14 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b6:99:8d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e8:04:f2 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:2a:ea:e4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:2b:de:15 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:7e:13:0f:15:30:da Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:7a:7b:3d:0d:eb:52 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.850607 4717 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.850838 4717 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.852850 4717 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.853054 4717 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.853103 4717 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":nu
ll,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.853347 4717 topology_manager.go:138] "Creating topology manager with none policy" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.853358 4717 container_manager_linux.go:303] "Creating device plugin manager" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.853772 4717 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.854442 4717 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.855648 4717 state_mem.go:36] "Initialized new in-memory state store" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.855839 4717 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.862735 4717 kubelet.go:418] "Attempting to sync node with API server" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.862821 4717 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.862927 4717 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.862949 4717 kubelet.go:324] "Adding apiserver pod source" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.862972 4717 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.867322 4717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 18 11:49:26 crc kubenswrapper[4717]: E0218 11:49:26.867435 4717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.868063 4717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 18 11:49:26 crc kubenswrapper[4717]: E0218 11:49:26.868162 4717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.871037 4717 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.872245 4717 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.873999 4717 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.880791 4717 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.880837 4717 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.880849 4717 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.880858 4717 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.880875 4717 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.880882 4717 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.880889 4717 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.880900 4717 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.880909 4717 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.880916 4717 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.880933 4717 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.880940 4717 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.890232 4717 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.891074 4717 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.891177 4717 server.go:1280] "Started kubelet" Feb 18 11:49:26 crc systemd[1]: Started Kubernetes Kubelet. Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.893126 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.893160 4717 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.893230 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:52:10.822258677 +0000 UTC Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.893169 4717 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.893186 4717 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.894137 4717 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.894212 4717 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.894218 4717 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.894410 4717 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 18 11:49:26 crc kubenswrapper[4717]: E0218 11:49:26.894641 4717 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 11:49:26 crc kubenswrapper[4717]: W0218 11:49:26.895225 4717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 18 11:49:26 crc kubenswrapper[4717]: E0218 11:49:26.895333 4717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.896175 4717 server.go:460] "Adding debug handlers to kubelet server" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.896182 4717 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.896334 4717 factory.go:55] Registering systemd factory Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.896364 4717 factory.go:221] Registration of the systemd container factory successfully Feb 18 11:49:26 crc kubenswrapper[4717]: E0218 11:49:26.900194 4717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="200ms" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.900594 4717 factory.go:153] Registering CRI-O factory Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 
11:49:26.903784 4717 factory.go:221] Registration of the crio container factory successfully Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.903952 4717 factory.go:103] Registering Raw factory Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.904048 4717 manager.go:1196] Started watching for new ooms in manager Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.905120 4717 manager.go:319] Starting recovery of all containers Feb 18 11:49:26 crc kubenswrapper[4717]: E0218 11:49:26.905281 4717 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189554e40d3713e0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 11:49:26.891148256 +0000 UTC m=+1.293249572,LastTimestamp:2026-02-18 11:49:26.891148256 +0000 UTC m=+1.293249572,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.916646 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.916710 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 
11:49:26.916725 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.916737 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.916750 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.916764 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.916775 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.916788 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.916804 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.916815 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.916826 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.916839 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.916852 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.916867 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.916970 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.916982 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.916992 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917004 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917016 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917029 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917040 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 18 
11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917051 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917096 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917111 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917123 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917134 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917148 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 
11:49:26.917161 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917173 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917187 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917199 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917211 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917229 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917272 4717 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917287 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917299 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917312 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917324 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917336 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917348 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917362 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917393 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917407 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917420 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917433 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917446 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917458 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917469 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917481 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917494 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917508 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917524 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917541 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917554 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917567 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917582 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917597 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917610 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917621 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917635 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917646 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917661 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917674 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917686 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" 
seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917698 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917709 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917720 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917738 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917752 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917765 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 18 11:49:26 crc 
kubenswrapper[4717]: I0218 11:49:26.917779 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917791 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917803 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917814 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917825 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917839 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917852 4717 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917864 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917876 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917888 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917900 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917913 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917924 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917935 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917946 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917958 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917970 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917981 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.917992 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918006 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918018 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918031 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918050 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918062 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918073 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918084 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918095 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918106 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918116 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918126 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918137 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918147 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918159 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918171 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918190 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918203 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918215 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" 
seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918229 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918242 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918275 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918296 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918316 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918330 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918342 4717 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918354 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918366 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918378 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918390 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918403 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918414 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918428 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918441 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918453 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918464 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918477 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918489 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918500 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918512 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918523 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918534 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918545 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918557 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" 
seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918570 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918581 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918591 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918604 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918615 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918626 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918645 
4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918658 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918671 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918683 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918693 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918704 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918716 4717 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918727 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918738 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918750 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918762 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918796 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918810 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918854 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918868 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918883 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918897 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918914 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918927 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918940 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918952 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918965 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918977 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.918991 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919003 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" 
seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919017 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919036 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919050 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919063 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919076 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919090 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 18 11:49:26 crc 
kubenswrapper[4717]: I0218 11:49:26.919103 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919116 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919133 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919146 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919159 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919172 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919184 4717 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919197 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919210 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919222 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919236 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919250 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919283 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919299 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.919314 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921551 4717 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921579 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921597 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921613 4717 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921635 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921649 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921663 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921676 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921690 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921702 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921715 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921732 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921746 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921762 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921778 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921793 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921806 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921820 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921834 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921853 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921869 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921882 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" 
seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921896 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921909 4717 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921922 4717 reconstruct.go:97] "Volume reconstruction finished" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.921932 4717 reconciler.go:26] "Reconciler: start to sync state" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.931652 4717 manager.go:324] Recovery completed Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.942024 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.944052 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.944114 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.944124 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.945161 4717 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.945183 4717 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 18 11:49:26 crc kubenswrapper[4717]: I0218 11:49:26.945211 4717 state_mem.go:36] 
"Initialized new in-memory state store" Feb 18 11:49:26 crc kubenswrapper[4717]: E0218 11:49:26.994844 4717 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.032861 4717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.035149 4717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.035199 4717 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.035235 4717 kubelet.go:2335] "Starting kubelet main sync loop" Feb 18 11:49:27 crc kubenswrapper[4717]: E0218 11:49:27.035431 4717 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 18 11:49:27 crc kubenswrapper[4717]: W0218 11:49:27.036149 4717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Feb 18 11:49:27 crc kubenswrapper[4717]: E0218 11:49:27.036243 4717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.061521 4717 policy_none.go:49] "None policy: Start" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.063069 4717 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 
11:49:27.063100 4717 state_mem.go:35] "Initializing new in-memory state store" Feb 18 11:49:27 crc kubenswrapper[4717]: E0218 11:49:27.096212 4717 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 11:49:27 crc kubenswrapper[4717]: E0218 11:49:27.101327 4717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="400ms" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.128402 4717 manager.go:334] "Starting Device Plugin manager" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.128463 4717 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.128480 4717 server.go:79] "Starting device plugin registration server" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.129353 4717 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.129400 4717 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.129764 4717 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.130217 4717 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.130237 4717 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.136222 4717 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.136316 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.137767 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.140521 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.140540 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.140729 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.140958 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.141002 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.142313 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.143049 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.143062 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.144135 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.144157 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:27 crc kubenswrapper[4717]: E0218 11:49:27.143165 4717 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.144291 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.145644 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.145713 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.145885 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.146412 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.146452 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.146465 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.147610 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.147635 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.147645 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.147823 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.147883 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.147911 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.149436 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.149472 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.149475 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.149482 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.149497 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.149510 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.149666 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.149861 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.149915 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.150548 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.150577 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.150588 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.150732 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.150762 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.150793 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.150812 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.150821 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.151304 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.151320 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 
11:49:27.151329 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.225881 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.225929 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.225958 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.226754 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.226799 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.226817 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.226834 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.226851 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.226867 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.226894 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 
18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.226908 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.226921 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.226939 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.226953 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.226966 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.229859 4717 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.230918 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.231020 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.231183 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.231290 4717 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: E0218 11:49:27.231824 4717 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.327879 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328231 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328399 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328452 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328510 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328487 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328716 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328745 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328764 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328797 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328814 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328479 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328894 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328881 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328926 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328829 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328975 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.328584 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.329025 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.329023 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.329071 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.329078 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.329108 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.329132 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.329108 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.329149 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.329160 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.329188 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.329193 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.329206 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.432225 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.434448 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.434562 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.434628 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.434716 4717 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: E0218 11:49:27.436792 4717 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.480176 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: E0218 11:49:27.502319 4717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="800ms"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.503666 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.523775 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: W0218 11:49:27.523990 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-bd5d474ff67c004fa1120a96c92b35b3c24cb4432820720c5ad3757acec39def WatchSource:0}: Error finding container bd5d474ff67c004fa1120a96c92b35b3c24cb4432820720c5ad3757acec39def: Status 404 returned error can't find the container with id bd5d474ff67c004fa1120a96c92b35b3c24cb4432820720c5ad3757acec39def
Feb 18 11:49:27 crc kubenswrapper[4717]: W0218 11:49:27.540210 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-448f09cc1c1b348a2050d9778094b847b4698f5fdf9bafdff8f7275099e0b14e WatchSource:0}: Error finding container 448f09cc1c1b348a2050d9778094b847b4698f5fdf9bafdff8f7275099e0b14e: Status 404 returned error can't find the container with id 448f09cc1c1b348a2050d9778094b847b4698f5fdf9bafdff8f7275099e0b14e
Feb 18 11:49:27 crc kubenswrapper[4717]: W0218 11:49:27.542366 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-9e950e9d2bf37ca304172a24187e213f9b90edd1d7c486b31d04f617d6af210c WatchSource:0}: Error finding container 9e950e9d2bf37ca304172a24187e213f9b90edd1d7c486b31d04f617d6af210c: Status 404 returned error can't find the container with id 9e950e9d2bf37ca304172a24187e213f9b90edd1d7c486b31d04f617d6af210c
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.544941 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.552210 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: W0218 11:49:27.564736 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-6b580a467685c3a1f3b52b1be2c0b8c1467140b316b336e15f56f19232e17ec2 WatchSource:0}: Error finding container 6b580a467685c3a1f3b52b1be2c0b8c1467140b316b336e15f56f19232e17ec2: Status 404 returned error can't find the container with id 6b580a467685c3a1f3b52b1be2c0b8c1467140b316b336e15f56f19232e17ec2
Feb 18 11:49:27 crc kubenswrapper[4717]: W0218 11:49:27.576959 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-8775ab070a61b01f1a9e2eb5a8d59406d787ebb89b47358f1c9928db112a61df WatchSource:0}: Error finding container 8775ab070a61b01f1a9e2eb5a8d59406d787ebb89b47358f1c9928db112a61df: Status 404 returned error can't find the container with id 8775ab070a61b01f1a9e2eb5a8d59406d787ebb89b47358f1c9928db112a61df
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.837542 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.841700 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.841757 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.841771 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.841856 4717 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: E0218 11:49:27.842585 4717 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc"
Feb 18 11:49:27 crc kubenswrapper[4717]: W0218 11:49:27.885099 4717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused
Feb 18 11:49:27 crc kubenswrapper[4717]: E0218 11:49:27.885231 4717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError"
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.892207 4717 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused
Feb 18 11:49:27 crc kubenswrapper[4717]: I0218 11:49:27.894319 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 10:13:18.83392001 +0000 UTC
Feb 18 11:49:28 crc kubenswrapper[4717]: I0218 11:49:28.041418 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6b580a467685c3a1f3b52b1be2c0b8c1467140b316b336e15f56f19232e17ec2"}
Feb 18 11:49:28 crc kubenswrapper[4717]: I0218 11:49:28.043468 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"448f09cc1c1b348a2050d9778094b847b4698f5fdf9bafdff8f7275099e0b14e"}
Feb 18 11:49:28 crc kubenswrapper[4717]: I0218 11:49:28.044526 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9e950e9d2bf37ca304172a24187e213f9b90edd1d7c486b31d04f617d6af210c"}
Feb 18 11:49:28 crc kubenswrapper[4717]: I0218 11:49:28.045664 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bd5d474ff67c004fa1120a96c92b35b3c24cb4432820720c5ad3757acec39def"}
Feb 18 11:49:28 crc kubenswrapper[4717]: I0218 11:49:28.046761 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8775ab070a61b01f1a9e2eb5a8d59406d787ebb89b47358f1c9928db112a61df"}
Feb 18 11:49:28 crc kubenswrapper[4717]: W0218 11:49:28.218416 4717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused
Feb 18 11:49:28 crc kubenswrapper[4717]: E0218 11:49:28.218901 4717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError"
Feb 18 11:49:28 crc kubenswrapper[4717]: W0218 11:49:28.243100 4717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused
Feb 18 11:49:28 crc kubenswrapper[4717]: E0218 11:49:28.243317 4717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError"
Feb 18 11:49:28 crc kubenswrapper[4717]: E0218 11:49:28.303590 4717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="1.6s"
Feb 18 11:49:28 crc kubenswrapper[4717]: W0218 11:49:28.403178 4717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused
Feb 18 11:49:28 crc kubenswrapper[4717]: E0218 11:49:28.403284 4717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError"
Feb 18 11:49:28 crc kubenswrapper[4717]: I0218 11:49:28.643098 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 11:49:28 crc kubenswrapper[4717]: I0218 11:49:28.645477 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 11:49:28 crc kubenswrapper[4717]: I0218 11:49:28.646022 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 11:49:28 crc kubenswrapper[4717]: I0218 11:49:28.646071 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 11:49:28 crc kubenswrapper[4717]: I0218 11:49:28.646141 4717 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 11:49:28 crc kubenswrapper[4717]: E0218 11:49:28.646756 4717 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc"
Feb 18 11:49:28 crc kubenswrapper[4717]: I0218 11:49:28.809885 4717 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 18 11:49:28 crc kubenswrapper[4717]: E0218 11:49:28.811035 4717 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError"
Feb 18 11:49:28 crc kubenswrapper[4717]: I0218 11:49:28.892832 4717 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused
Feb 18 11:49:28 crc kubenswrapper[4717]: I0218 11:49:28.894892 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 23:57:52.440643251 +0000 UTC
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.052454 4717 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="598bdc87459f43b34243a0e9d6ecd2e60de53b8b142a6ffa93096b514736f5ac" exitCode=0
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.052535 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"598bdc87459f43b34243a0e9d6ecd2e60de53b8b142a6ffa93096b514736f5ac"}
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.052608 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.053778 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.053818 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.053832 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.055356 4717 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="7b38a1e2a7f6ac4dd14690013c73f68d4c9e2c7143f9edad2dbc9907d4669c98" exitCode=0
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.055576 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"7b38a1e2a7f6ac4dd14690013c73f68d4c9e2c7143f9edad2dbc9907d4669c98"}
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.055620 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.056654 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.056686 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.056699 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.058306 4717 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b" exitCode=0
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.058399 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b"}
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.058445 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.059575 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.059608 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.059624 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.067308 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd"}
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.067352 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.067373 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55"}
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.067390 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf"}
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.067402 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d"}
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.068373 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.068413 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.068426 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.072963 4717 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7" exitCode=0
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.073015 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7"}
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.073191 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.079221 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.079309 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.079323 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.086404 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.087596 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.087626 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.087640 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.892954 4717 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused
Feb 18 11:49:29 crc kubenswrapper[4717]: I0218 11:49:29.895082 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 10:59:13.446844821 +0000 UTC
Feb 18 11:49:29 crc kubenswrapper[4717]: E0218 11:49:29.907082 4717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="3.2s"
Feb 18 11:49:29 crc kubenswrapper[4717]: W0218 11:49:29.995857 4717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused
Feb 18 11:49:29 crc kubenswrapper[4717]: E0218 11:49:29.995961 4717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError"
Feb 18 11:49:30 crc kubenswrapper[4717]: W0218 11:49:30.026248 4717 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused
Feb 18 11:49:30 crc kubenswrapper[4717]: E0218 11:49:30.026395 4717 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError"
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.078253 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fa4b77ec7db777a423715817b1109136588611133bfce104406dd19d01764f17"}
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.078331 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.078338 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e4613447ce28f5f823dca2464f6a32d4b53a6edcffdab8b14b6bee24ad063a93"}
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.078355 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d69511a7f432f28b50125317816ad4277c99af230bedea36b51eb065df9550b7"}
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.079451 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.079484 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.079497 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.081561 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b"}
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.081608 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65"}
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.081627 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5"}
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.081641 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430"}
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.083490 4717 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2f0cf6ea4909e64b0bded040d2949a469b5c2e17d01a5087c5e66f4301f353ff" exitCode=0
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.083592 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2f0cf6ea4909e64b0bded040d2949a469b5c2e17d01a5087c5e66f4301f353ff"}
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.083639 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.084605 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.084643 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.084654 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.085335 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"39a14a2b95d0d214a2ddca91de7f1d1738e1b775b9c1d2372d683d77a91eb4e8"}
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.085373 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.085362 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.088388 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.088429 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.088445 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.092302 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.092356 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.092372 4717 kubelet_node_status.go:724]
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.247893 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.249368 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.249489 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.249575 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.249661 4717 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 11:49:30 crc kubenswrapper[4717]: E0218 11:49:30.250292 4717 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Feb 18 11:49:30 crc kubenswrapper[4717]: I0218 11:49:30.895702 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:00:23.564442777 +0000 UTC Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.091930 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.091921 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4"} Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.093054 4717 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.093151 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.093211 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.093841 4717 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9fa02f6578b7c706c5107415e36c2ba21826967190742b732b387c8853b10550" exitCode=0 Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.093912 4717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.093936 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.094014 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.094127 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.094030 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9fa02f6578b7c706c5107415e36c2ba21826967190742b732b387c8853b10550"} Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.094498 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.094515 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.094523 4717 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.094810 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.094882 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.094907 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.095005 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.095026 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.094937 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:31 crc kubenswrapper[4717]: I0218 11:49:31.897122 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:06:03.428710111 +0000 UTC Feb 18 11:49:32 crc kubenswrapper[4717]: I0218 11:49:32.100829 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2fd037ce86d41c95ec34b977239e3d0293bb808c3c58a76c9e857182f5690e46"} Feb 18 11:49:32 crc kubenswrapper[4717]: I0218 11:49:32.100868 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:32 crc kubenswrapper[4717]: I0218 11:49:32.100875 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 18 11:49:32 crc kubenswrapper[4717]: I0218 11:49:32.100875 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5e3caff189b204e7e933545efd5a5acbb2fd97003a683bb46e570d8cf4b2ba11"} Feb 18 11:49:32 crc kubenswrapper[4717]: I0218 11:49:32.100977 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e9cbf93f12390f9b3ccc4f8de94d2f42260dd9acd5ca40c8bd9ca970e51d9d2b"} Feb 18 11:49:32 crc kubenswrapper[4717]: I0218 11:49:32.100987 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7518840ee42cef58d644b46ae1e1b80697b56f388a3527d3b121e9182e52603c"} Feb 18 11:49:32 crc kubenswrapper[4717]: I0218 11:49:32.100999 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:49:32 crc kubenswrapper[4717]: I0218 11:49:32.101009 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"aa91e3cfc80d3351055790e7820fb376618b3b0f33e23e56bc56bca79031cbc1"} Feb 18 11:49:32 crc kubenswrapper[4717]: I0218 11:49:32.101766 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:32 crc kubenswrapper[4717]: I0218 11:49:32.101777 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:32 crc kubenswrapper[4717]: I0218 11:49:32.101792 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:32 crc kubenswrapper[4717]: I0218 11:49:32.101800 4717 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:32 crc kubenswrapper[4717]: I0218 11:49:32.101783 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:32 crc kubenswrapper[4717]: I0218 11:49:32.101816 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:32 crc kubenswrapper[4717]: I0218 11:49:32.187809 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 18 11:49:32 crc kubenswrapper[4717]: I0218 11:49:32.897239 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 04:42:53.563348652 +0000 UTC Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.104328 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.104788 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.105286 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.105342 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.105357 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.106155 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.106296 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.106386 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.150782 4717 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.451032 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.452757 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.452825 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.452836 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.452869 4717 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.543122 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.543395 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.564022 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.564336 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.564423 4717 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.898413 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 21:32:05.305649398 +0000 UTC Feb 18 11:49:33 crc kubenswrapper[4717]: I0218 11:49:33.901818 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.107713 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.107771 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.109309 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.109376 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.109394 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.109509 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.109540 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.109552 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.326200 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-etcd/etcd-crc" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.664468 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.664779 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.666392 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.666466 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.666481 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.899009 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:18:12.803637238 +0000 UTC Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.959490 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.959721 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.961404 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.961543 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:34 crc kubenswrapper[4717]: I0218 11:49:34.961654 4717 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 11:49:35 crc kubenswrapper[4717]: I0218 11:49:35.110584 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:35 crc kubenswrapper[4717]: I0218 11:49:35.111632 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:35 crc kubenswrapper[4717]: I0218 11:49:35.111689 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:35 crc kubenswrapper[4717]: I0218 11:49:35.111701 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:35 crc kubenswrapper[4717]: I0218 11:49:35.287469 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:49:35 crc kubenswrapper[4717]: I0218 11:49:35.287799 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:35 crc kubenswrapper[4717]: I0218 11:49:35.289492 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:35 crc kubenswrapper[4717]: I0218 11:49:35.289557 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:35 crc kubenswrapper[4717]: I0218 11:49:35.289578 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:35 crc kubenswrapper[4717]: I0218 11:49:35.826511 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:49:35 crc kubenswrapper[4717]: I0218 11:49:35.826748 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:35 crc kubenswrapper[4717]: I0218 
11:49:35.829058 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:35 crc kubenswrapper[4717]: I0218 11:49:35.829136 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:35 crc kubenswrapper[4717]: I0218 11:49:35.829156 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:35 crc kubenswrapper[4717]: I0218 11:49:35.899377 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 18:58:23.212740903 +0000 UTC Feb 18 11:49:36 crc kubenswrapper[4717]: I0218 11:49:36.899842 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:57:27.322257747 +0000 UTC Feb 18 11:49:37 crc kubenswrapper[4717]: E0218 11:49:37.145019 4717 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 11:49:37 crc kubenswrapper[4717]: I0218 11:49:37.900956 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:23:10.389145605 +0000 UTC Feb 18 11:49:37 crc kubenswrapper[4717]: I0218 11:49:37.959888 4717 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 11:49:37 crc kubenswrapper[4717]: I0218 11:49:37.959986 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 11:49:38 crc kubenswrapper[4717]: I0218 11:49:38.901848 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:07:11.576146587 +0000 UTC Feb 18 11:49:38 crc kubenswrapper[4717]: I0218 11:49:38.909074 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:49:38 crc kubenswrapper[4717]: I0218 11:49:38.909313 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:38 crc kubenswrapper[4717]: I0218 11:49:38.910528 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:38 crc kubenswrapper[4717]: I0218 11:49:38.910553 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:38 crc kubenswrapper[4717]: I0218 11:49:38.910563 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:38 crc kubenswrapper[4717]: I0218 11:49:38.914888 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:49:39 crc kubenswrapper[4717]: I0218 11:49:39.122944 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:39 crc kubenswrapper[4717]: I0218 11:49:39.124520 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:39 crc kubenswrapper[4717]: I0218 11:49:39.124563 4717 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:39 crc kubenswrapper[4717]: I0218 11:49:39.124579 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:39 crc kubenswrapper[4717]: I0218 11:49:39.129302 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:49:39 crc kubenswrapper[4717]: I0218 11:49:39.902298 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 15:54:26.766489691 +0000 UTC Feb 18 11:49:40 crc kubenswrapper[4717]: I0218 11:49:40.124799 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:40 crc kubenswrapper[4717]: I0218 11:49:40.125924 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:40 crc kubenswrapper[4717]: I0218 11:49:40.125963 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:40 crc kubenswrapper[4717]: I0218 11:49:40.125975 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:40 crc kubenswrapper[4717]: I0218 11:49:40.892788 4717 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 18 11:49:40 crc kubenswrapper[4717]: I0218 11:49:40.903057 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 16:25:16.065164161 +0000 UTC Feb 18 11:49:40 crc kubenswrapper[4717]: I0218 11:49:40.921836 4717 
patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 11:49:40 crc kubenswrapper[4717]: I0218 11:49:40.921891 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 11:49:40 crc kubenswrapper[4717]: I0218 11:49:40.931301 4717 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 11:49:40 crc kubenswrapper[4717]: I0218 11:49:40.931347 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 11:49:41 crc kubenswrapper[4717]: I0218 11:49:41.904207 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 05:59:29.533309027 +0000 UTC Feb 18 11:49:42 crc kubenswrapper[4717]: I0218 11:49:42.230167 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 18 11:49:42 crc kubenswrapper[4717]: I0218 11:49:42.230591 4717 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:42 crc kubenswrapper[4717]: I0218 11:49:42.231519 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:42 crc kubenswrapper[4717]: I0218 11:49:42.231552 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:42 crc kubenswrapper[4717]: I0218 11:49:42.231562 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:42 crc kubenswrapper[4717]: I0218 11:49:42.240543 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 18 11:49:42 crc kubenswrapper[4717]: I0218 11:49:42.905436 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:58:22.618516492 +0000 UTC Feb 18 11:49:43 crc kubenswrapper[4717]: I0218 11:49:43.130848 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:43 crc kubenswrapper[4717]: I0218 11:49:43.131831 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:43 crc kubenswrapper[4717]: I0218 11:49:43.131992 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:43 crc kubenswrapper[4717]: I0218 11:49:43.132096 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:43 crc kubenswrapper[4717]: I0218 11:49:43.906737 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 18:48:40.98100052 +0000 UTC Feb 18 11:49:43 crc kubenswrapper[4717]: I0218 11:49:43.908138 4717 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:49:43 crc kubenswrapper[4717]: I0218 11:49:43.908325 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:43 crc kubenswrapper[4717]: I0218 11:49:43.914291 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:49:43 crc kubenswrapper[4717]: I0218 11:49:43.914741 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:43 crc kubenswrapper[4717]: I0218 11:49:43.914802 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:43 crc kubenswrapper[4717]: I0218 11:49:43.914815 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:44 crc kubenswrapper[4717]: I0218 11:49:44.132664 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:44 crc kubenswrapper[4717]: I0218 11:49:44.133667 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:44 crc kubenswrapper[4717]: I0218 11:49:44.133703 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:44 crc kubenswrapper[4717]: I0218 11:49:44.133714 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:44 crc kubenswrapper[4717]: I0218 11:49:44.906964 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:07:25.486083655 +0000 UTC Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.907375 4717 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:31:17.588728608 +0000 UTC Feb 18 11:49:45 crc kubenswrapper[4717]: E0218 11:49:45.916751 4717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.918333 4717 trace.go:236] Trace[574943774]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 11:49:31.087) (total time: 14830ms): Feb 18 11:49:45 crc kubenswrapper[4717]: Trace[574943774]: ---"Objects listed" error: 14830ms (11:49:45.918) Feb 18 11:49:45 crc kubenswrapper[4717]: Trace[574943774]: [14.830572379s] [14.830572379s] END Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.918361 4717 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.918709 4717 trace.go:236] Trace[1469797092]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 11:49:31.062) (total time: 14856ms): Feb 18 11:49:45 crc kubenswrapper[4717]: Trace[1469797092]: ---"Objects listed" error: 14855ms (11:49:45.918) Feb 18 11:49:45 crc kubenswrapper[4717]: Trace[1469797092]: [14.856026914s] [14.856026914s] END Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.918727 4717 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.919918 4717 trace.go:236] Trace[224335199]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 11:49:33.901) (total time: 12018ms): Feb 18 11:49:45 crc kubenswrapper[4717]: Trace[224335199]: ---"Objects listed" error: 12018ms 
(11:49:45.919) Feb 18 11:49:45 crc kubenswrapper[4717]: Trace[224335199]: [12.018190257s] [12.018190257s] END Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.920006 4717 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.920458 4717 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.920485 4717 trace.go:236] Trace[1701014605]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 11:49:35.641) (total time: 10279ms): Feb 18 11:49:45 crc kubenswrapper[4717]: Trace[1701014605]: ---"Objects listed" error: 10278ms (11:49:45.920) Feb 18 11:49:45 crc kubenswrapper[4717]: Trace[1701014605]: [10.279017304s] [10.279017304s] END Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.920526 4717 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 11:49:45 crc kubenswrapper[4717]: E0218 11:49:45.925076 4717 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.927098 4717 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.947027 4717 csr.go:261] certificate signing request csr-tpnd8 is approved, waiting to be issued Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.958723 4717 csr.go:257] certificate signing request csr-tpnd8 is issued Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.968512 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.975202 
4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.976789 4717 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.976851 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.977547 4717 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33298->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.977574 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33298->192.168.126.11:17697: read: connection reset by peer" Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.978779 4717 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Feb 18 11:49:45 crc kubenswrapper[4717]: I0218 11:49:45.978808 4717 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.137951 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.139794 4717 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4" exitCode=255 Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.139883 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4"} Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.173540 4717 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.272281 4717 scope.go:117] "RemoveContainer" containerID="af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.530342 4717 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 18 11:49:46 crc kubenswrapper[4717]: W0218 11:49:46.530894 4717 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 18 11:49:46 crc kubenswrapper[4717]: 
W0218 11:49:46.530937 4717 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 18 11:49:46 crc kubenswrapper[4717]: W0218 11:49:46.530942 4717 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 18 11:49:46 crc kubenswrapper[4717]: W0218 11:49:46.531010 4717 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.874010 4717 apiserver.go:52] "Watching apiserver" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.880038 4717 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.881051 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-xmb4p","openshift-machine-config-operator/machine-config-daemon-5wbk5","openshift-multus/multus-additional-cni-plugins-s242q","openshift-multus/multus-hvktx","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-node-2fh5s","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Feb 18 11:49:46 crc 
kubenswrapper[4717]: I0218 11:49:46.881565 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.883057 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.883109 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.883209 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.883329 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.883433 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.883534 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.883735 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.883769 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xmb4p" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.883901 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.883906 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.884326 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.889097 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.889168 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.889386 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.889517 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.889564 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.889840 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.889917 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.894317 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.894353 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.894359 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.894373 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.895523 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.895597 4717 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.895712 4717 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.896607 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.896820 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.896982 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.898015 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.898164 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.898279 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.898426 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.898597 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.898600 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.898561 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.899035 4717 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.899080 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.899116 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.899121 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.899146 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.899043 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.899424 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.899956 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.900118 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.906057 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.907281 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.907454 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:31:18.675155935 +0000 UTC Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.922771 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925029 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925080 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925105 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925127 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925151 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925173 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925192 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925211 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925242 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925287 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925309 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925331 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925352 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925373 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925394 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925424 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925450 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925477 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925501 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925524 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925545 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925566 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925588 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: 
\"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925609 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925629 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925650 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925672 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925696 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925716 4717 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925736 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925728 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925756 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925780 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925830 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925853 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925876 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925900 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925920 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925939 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.925991 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926013 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926032 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " 
Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926035 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926046 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926148 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926145 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926212 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926234 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926216 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926279 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926303 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926304 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926324 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926348 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926378 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926400 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926405 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926415 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926427 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926451 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926471 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926472 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926502 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926527 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926548 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926572 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926592 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926613 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926613 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926621 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926637 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926639 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926660 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926683 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926705 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926727 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926748 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926768 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926793 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926813 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926819 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926833 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926845 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926856 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926860 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926881 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926910 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926933 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926952 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926970 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.926979 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927006 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927046 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927078 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927104 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 
11:49:46.927129 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927152 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927177 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927199 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927211 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927205 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927280 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927310 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927332 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927358 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927388 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" 
(UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927413 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927436 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927457 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927460 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927478 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927499 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927523 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927545 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927568 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927592 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927599 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927613 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927632 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927651 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927669 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927690 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927688 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927699 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927751 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927973 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928024 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928066 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.927710 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928122 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928130 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928159 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928182 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928207 4717 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928233 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928280 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928306 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928335 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928358 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928383 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928409 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928432 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928452 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928470 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 
11:49:46.928486 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928501 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928516 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928532 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928547 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928573 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928587 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928603 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928619 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928634 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928650 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928666 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928681 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928697 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928713 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928730 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928747 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" 
(UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928762 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928783 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928801 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928820 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928836 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928853 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928870 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928886 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928902 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928919 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928935 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 
11:49:46.928949 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928967 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928983 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928998 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929014 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929032 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929047 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929064 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929079 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929096 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929113 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 11:49:46 
crc kubenswrapper[4717]: I0218 11:49:46.929129 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929145 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929162 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929178 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929194 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929212 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929229 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929245 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929638 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.930738 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.930816 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 18 11:49:46 crc kubenswrapper[4717]: 
I0218 11:49:46.930836 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.930851 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.930867 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.930883 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.930900 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.930920 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.930938 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.930953 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.930996 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931012 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931027 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931045 4717 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931061 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931077 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931092 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931112 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931128 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931145 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931161 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931176 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931192 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931207 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931233 4717 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931250 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931281 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931297 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931315 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931332 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931349 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931410 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/41f72a5f-4820-4dc2-a6c5-243550881aaf-cni-binary-copy\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931432 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-var-lib-openvswitch\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931451 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/823580ef-975b-4298-955b-fb3c0b5fefc3-rootfs\") pod \"machine-config-daemon-5wbk5\" (UID: \"823580ef-975b-4298-955b-fb3c0b5fefc3\") " pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931475 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931492 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-env-overrides\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931508 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-systemd-units\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.931523 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-node-log\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932251 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932686 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932704 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ed61105a-bc90-46a4-991f-466e6836d94d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932726 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932742 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pmp8\" (UniqueName: \"kubernetes.io/projected/ed61105a-bc90-46a4-991f-466e6836d94d-kube-api-access-9pmp8\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932759 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-multus-socket-dir-parent\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932777 
4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932796 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-multus-cni-dir\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932812 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7rtd\" (UniqueName: \"kubernetes.io/projected/41f72a5f-4820-4dc2-a6c5-243550881aaf-kube-api-access-z7rtd\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932827 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t997l\" (UniqueName: \"kubernetes.io/projected/823580ef-975b-4298-955b-fb3c0b5fefc3-kube-api-access-t997l\") pod \"machine-config-daemon-5wbk5\" (UID: \"823580ef-975b-4298-955b-fb3c0b5fefc3\") " pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932846 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:49:46 crc 
kubenswrapper[4717]: I0218 11:49:46.932860 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-multus-conf-dir\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932876 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932891 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-ovn\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932905 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-run-k8s-cni-cncf-io\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932920 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-systemd\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: 
I0218 11:49:46.932935 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovnkube-script-lib\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932949 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ed61105a-bc90-46a4-991f-466e6836d94d-os-release\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.932967 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933007 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-run-netns\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933028 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-etc-kubernetes\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 
11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933050 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ed61105a-bc90-46a4-991f-466e6836d94d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933079 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-run-netns\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933109 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d2545cd7-d1a5-4248-a0e1-eb6f07f0023e-hosts-file\") pod \"node-resolver-xmb4p\" (UID: \"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\") " pod="openshift-dns/node-resolver-xmb4p" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933129 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ed61105a-bc90-46a4-991f-466e6836d94d-cni-binary-copy\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933148 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-kubelet\") pod \"ovnkube-node-2fh5s\" (UID: 
\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933166 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-slash\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933184 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-cni-netd\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933204 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933343 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-cnibin\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933364 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-var-lib-kubelet\") 
pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933379 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/823580ef-975b-4298-955b-fb3c0b5fefc3-proxy-tls\") pod \"machine-config-daemon-5wbk5\" (UID: \"823580ef-975b-4298-955b-fb3c0b5fefc3\") " pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933402 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933422 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk2ht\" (UniqueName: \"kubernetes.io/projected/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-kube-api-access-xk2ht\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933438 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933455 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-openvswitch\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933471 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-cni-bin\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933485 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-os-release\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933514 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-var-lib-cni-multus\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933542 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/41f72a5f-4820-4dc2-a6c5-243550881aaf-multus-daemon-config\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933558 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-run-multus-certs\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933573 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/823580ef-975b-4298-955b-fb3c0b5fefc3-mcd-auth-proxy-config\") pod \"machine-config-daemon-5wbk5\" (UID: \"823580ef-975b-4298-955b-fb3c0b5fefc3\") " pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933590 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-run-ovn-kubernetes\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933606 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbz4m\" (UniqueName: \"kubernetes.io/projected/d2545cd7-d1a5-4248-a0e1-eb6f07f0023e-kube-api-access-vbz4m\") pod \"node-resolver-xmb4p\" (UID: \"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\") " pod="openshift-dns/node-resolver-xmb4p" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933626 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933641 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ed61105a-bc90-46a4-991f-466e6836d94d-cnibin\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933656 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-log-socket\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933671 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-system-cni-dir\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933694 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-var-lib-cni-bin\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933708 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-hostroot\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933723 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ed61105a-bc90-46a4-991f-466e6836d94d-system-cni-dir\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933741 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933757 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-etc-openvswitch\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933772 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933792 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933807 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovnkube-config\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933822 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovn-node-metrics-cert\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933874 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933896 4717 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933906 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933916 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc 
kubenswrapper[4717]: I0218 11:49:46.933926 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933936 4717 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933947 4717 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933956 4717 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933966 4717 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933977 4717 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933986 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933996 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934006 4717 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934016 4717 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934027 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934038 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934047 4717 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934057 4717 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934067 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934077 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934087 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934097 4717 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934107 4717 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934117 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934127 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934138 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934147 4717 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934157 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928178 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.937473 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928196 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.946185 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928357 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928390 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928447 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928538 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928702 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.928949 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929131 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929153 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929200 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929376 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929470 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929699 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.929759 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.930040 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.930047 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.930237 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.930524 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.930534 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.930628 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933271 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.933294 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934135 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934236 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934287 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934482 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934696 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934756 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934776 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934784 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.934906 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.935038 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.935168 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.935182 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.935270 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.935359 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.935389 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:49:47.435369677 +0000 UTC m=+21.837470993 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.935388 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.935610 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.935692 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.935741 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.935746 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.935853 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.935972 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.935919 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.936302 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.936735 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.936535 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.936838 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.936840 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.936987 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.937052 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.937071 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.937079 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.937104 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.937429 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.937452 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.937455 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.937534 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.937549 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.937686 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.937760 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.937815 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.938010 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.938015 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.938061 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.938243 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.938641 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.944532 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.944665 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.946547 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.944825 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.945202 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.945020 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.945241 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.945385 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.945527 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.945716 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.945942 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.945970 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.946633 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.946068 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.946220 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.946382 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.945307 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.946760 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.946851 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.946951 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.947307 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.947535 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.947816 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.947987 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.948402 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.948507 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.948836 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.949095 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.949144 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.949378 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.949447 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.938742 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.949608 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.950456 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.950578 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.950845 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.950873 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.951224 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.951322 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.951480 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.951409 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.951582 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.951609 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.951847 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.952014 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.952097 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.952577 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.952834 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.953021 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.953048 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.953374 4717 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.954164 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.954172 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.954194 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.954598 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.955514 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.955607 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.955934 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.956401 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.956844 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.957934 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.958273 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.958530 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.958814 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.962558 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.962825 4717 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-18 11:44:45 +0000 UTC, rotation deadline is 2027-01-10 09:11:03.613151672 +0000 UTC Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.962897 4717 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7821h21m16.650260082s for next certificate rotation Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.963408 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.963582 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.963950 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.964125 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.964362 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.966755 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.969586 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.970416 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.970473 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.971924 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.971944 4717 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.970671 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.970878 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.971479 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.970755 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.970813 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.972558 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.972572 4717 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.971759 4717 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.973723 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.974225 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.974371 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.974422 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.974599 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.974619 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.975725 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.970932 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.971579 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.971664 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.971803 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.980834 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.981235 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:49:47.481202214 +0000 UTC m=+21.883303540 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.981367 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-18 11:49:47.481354548 +0000 UTC m=+21.883455864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.981401 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 11:49:47.481388229 +0000 UTC m=+21.883489545 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.986685 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.988076 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod 
"b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.988448 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.988821 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.990068 4717 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:49:46 crc kubenswrapper[4717]: E0218 11:49:46.990177 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:49:47.490147506 +0000 UTC m=+21.892248822 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.993007 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.993171 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.993399 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:46 crc kubenswrapper[4717]: I0218 11:49:46.994029 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:46.995180 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:46.995876 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.000947 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.003630 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.005198 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.006781 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.014805 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.018410 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.021066 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.027430 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.027459 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.034661 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-multus-socket-dir-parent\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.034707 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pmp8\" (UniqueName: \"kubernetes.io/projected/ed61105a-bc90-46a4-991f-466e6836d94d-kube-api-access-9pmp8\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.034735 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t997l\" (UniqueName: \"kubernetes.io/projected/823580ef-975b-4298-955b-fb3c0b5fefc3-kube-api-access-t997l\") pod \"machine-config-daemon-5wbk5\" (UID: \"823580ef-975b-4298-955b-fb3c0b5fefc3\") " pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.034786 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-multus-cni-dir\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.034812 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7rtd\" (UniqueName: \"kubernetes.io/projected/41f72a5f-4820-4dc2-a6c5-243550881aaf-kube-api-access-z7rtd\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " 
pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.034829 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-ovn\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.034845 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-run-k8s-cni-cncf-io\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.034891 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-multus-conf-dir\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.034909 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.034923 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-systemd\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.034939 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovnkube-script-lib\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.034956 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ed61105a-bc90-46a4-991f-466e6836d94d-os-release\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.034971 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ed61105a-bc90-46a4-991f-466e6836d94d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.034987 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-run-netns\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035011 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-run-netns\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035025 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-etc-kubernetes\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035039 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-kubelet\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035054 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-slash\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035068 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-cni-netd\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035067 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-multus-cni-dir\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035083 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/d2545cd7-d1a5-4248-a0e1-eb6f07f0023e-hosts-file\") pod \"node-resolver-xmb4p\" (UID: \"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\") " pod="openshift-dns/node-resolver-xmb4p" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035098 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ed61105a-bc90-46a4-991f-466e6836d94d-cni-binary-copy\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035113 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/823580ef-975b-4298-955b-fb3c0b5fefc3-proxy-tls\") pod \"machine-config-daemon-5wbk5\" (UID: \"823580ef-975b-4298-955b-fb3c0b5fefc3\") " pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035129 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035144 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-cnibin\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035159 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-var-lib-kubelet\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035173 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-openvswitch\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035175 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-multus-socket-dir-parent\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035201 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-cni-bin\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035216 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk2ht\" (UniqueName: \"kubernetes.io/projected/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-kube-api-access-xk2ht\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035236 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-run-multus-certs\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035278 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/823580ef-975b-4298-955b-fb3c0b5fefc3-mcd-auth-proxy-config\") pod \"machine-config-daemon-5wbk5\" (UID: \"823580ef-975b-4298-955b-fb3c0b5fefc3\") " pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035298 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-run-ovn-kubernetes\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035313 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbz4m\" (UniqueName: \"kubernetes.io/projected/d2545cd7-d1a5-4248-a0e1-eb6f07f0023e-kube-api-access-vbz4m\") pod \"node-resolver-xmb4p\" (UID: \"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\") " pod="openshift-dns/node-resolver-xmb4p" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035327 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-os-release\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035341 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-var-lib-cni-multus\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035355 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/41f72a5f-4820-4dc2-a6c5-243550881aaf-multus-daemon-config\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035374 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ed61105a-bc90-46a4-991f-466e6836d94d-cnibin\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035379 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-cni-netd\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035387 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-hostroot\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035441 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ed61105a-bc90-46a4-991f-466e6836d94d-system-cni-dir\") pod \"multus-additional-cni-plugins-s242q\" (UID: 
\"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035464 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-etc-openvswitch\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035482 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-log-socket\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035495 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-system-cni-dir\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035509 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-var-lib-cni-bin\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035523 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovnkube-config\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 
11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035537 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovn-node-metrics-cert\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035550 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035564 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/41f72a5f-4820-4dc2-a6c5-243550881aaf-cni-binary-copy\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035580 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-var-lib-openvswitch\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035595 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/823580ef-975b-4298-955b-fb3c0b5fefc3-rootfs\") pod \"machine-config-daemon-5wbk5\" (UID: \"823580ef-975b-4298-955b-fb3c0b5fefc3\") " pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 11:49:47 crc kubenswrapper[4717]: 
I0218 11:49:47.035617 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-env-overrides\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035638 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ed61105a-bc90-46a4-991f-466e6836d94d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035653 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-systemd-units\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035667 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-node-log\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035673 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ed61105a-bc90-46a4-991f-466e6836d94d-os-release\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035727 4717 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-multus-conf-dir\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035744 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d2545cd7-d1a5-4248-a0e1-eb6f07f0023e-hosts-file\") pod \"node-resolver-xmb4p\" (UID: \"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\") " pod="openshift-dns/node-resolver-xmb4p" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035761 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035781 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035782 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035805 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-cnibin\") pod \"multus-hvktx\" (UID: 
\"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035816 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-systemd\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035831 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-var-lib-kubelet\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035858 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-openvswitch\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035876 4717 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.035900 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-cni-bin\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.036060 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-run-multus-certs\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.036189 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ed61105a-bc90-46a4-991f-466e6836d94d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.036445 4717 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.036474 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-run-netns\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.036474 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovnkube-script-lib\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.036492 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ed61105a-bc90-46a4-991f-466e6836d94d-system-cni-dir\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " 
pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.036507 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-etc-openvswitch\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.036528 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-log-socket\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.036564 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-system-cni-dir\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.036583 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/823580ef-975b-4298-955b-fb3c0b5fefc3-rootfs\") pod \"machine-config-daemon-5wbk5\" (UID: \"823580ef-975b-4298-955b-fb3c0b5fefc3\") " pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.036718 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/823580ef-975b-4298-955b-fb3c0b5fefc3-mcd-auth-proxy-config\") pod \"machine-config-daemon-5wbk5\" (UID: \"823580ef-975b-4298-955b-fb3c0b5fefc3\") " pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 11:49:47 crc 
kubenswrapper[4717]: I0218 11:49:47.036750 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-systemd-units\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.036847 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-run-netns\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.036870 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-etc-kubernetes\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.036889 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-kubelet\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.036908 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-slash\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.037138 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovnkube-config\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.037656 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-env-overrides\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.038297 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-var-lib-openvswitch\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.038341 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ed61105a-bc90-46a4-991f-466e6836d94d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.038365 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-os-release\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.038374 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-ovn\") pod \"ovnkube-node-2fh5s\" (UID: 
\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.038394 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-node-log\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.038416 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-run-k8s-cni-cncf-io\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.038444 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-run-ovn-kubernetes\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.038463 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-var-lib-cni-bin\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.038826 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ed61105a-bc90-46a4-991f-466e6836d94d-cni-binary-copy\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" 
Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039042 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-host-var-lib-cni-multus\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039071 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ed61105a-bc90-46a4-991f-466e6836d94d-cnibin\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039094 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/41f72a5f-4820-4dc2-a6c5-243550881aaf-hostroot\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039131 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039151 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039164 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: 
\"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039173 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039183 4717 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039196 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039325 4717 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039341 4717 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039355 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039366 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") 
on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039379 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039390 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039400 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039412 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039424 4717 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039928 4717 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040000 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040014 4717 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040026 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040066 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040078 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040168 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040206 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040293 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040307 4717 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040319 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040331 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040347 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040359 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040372 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040385 4717 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040399 4717 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040411 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040421 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040430 4717 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040438 4717 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040446 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040455 4717 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040465 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 
crc kubenswrapper[4717]: I0218 11:49:47.040474 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040482 4717 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040492 4717 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040501 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040512 4717 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040521 4717 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040531 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040539 4717 reconciler_common.go:293] "Volume detached for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040547 4717 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040556 4717 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040565 4717 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040574 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040583 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040592 4717 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040602 4717 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040613 4717 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040623 4717 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040632 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040641 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040650 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040658 4717 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040666 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath 
\"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040675 4717 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040684 4717 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040693 4717 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040704 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040717 4717 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040730 4717 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040740 4717 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040749 4717 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040759 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040767 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040777 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040787 4717 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040800 4717 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040811 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040822 4717 reconciler_common.go:293] "Volume detached for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040833 4717 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040844 4717 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040855 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040865 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040874 4717 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040883 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040891 4717 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040900 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040909 4717 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040917 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040926 4717 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040935 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040943 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040951 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" 
DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040959 4717 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040968 4717 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040976 4717 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040986 4717 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.040995 4717 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041003 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041011 4717 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041019 4717 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041028 4717 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041036 4717 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041045 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041054 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041064 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041071 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041079 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041088 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041096 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041097 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/41f72a5f-4820-4dc2-a6c5-243550881aaf-multus-daemon-config\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041105 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041147 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041162 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041175 4717 reconciler_common.go:293] "Volume detached for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041188 4717 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041201 4717 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041213 4717 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041226 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041239 4717 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041253 4717 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041298 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node 
\"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041309 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041321 4717 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041333 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041346 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041360 4717 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041374 4717 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041386 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041398 
4717 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041410 4717 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041424 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041436 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041447 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041460 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041472 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041484 4717 reconciler_common.go:293] "Volume detached for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041496 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041508 4717 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041520 4717 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041532 4717 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041546 4717 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041558 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041570 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041582 4717 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041594 4717 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041606 4717 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041618 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041630 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041642 4717 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041654 4717 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc 
kubenswrapper[4717]: I0218 11:49:47.041676 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041690 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041704 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041716 4717 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041729 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041741 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041752 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041764 
4717 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041775 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041785 4717 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041797 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041809 4717 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041822 4717 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041836 4717 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041847 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041858 4717 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.041869 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.039923 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/41f72a5f-4820-4dc2-a6c5-243550881aaf-cni-binary-copy\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.042238 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovn-node-metrics-cert\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.049857 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/823580ef-975b-4298-955b-fb3c0b5fefc3-proxy-tls\") pod \"machine-config-daemon-5wbk5\" (UID: \"823580ef-975b-4298-955b-fb3c0b5fefc3\") " pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.052617 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.053496 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.055283 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.056509 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t997l\" (UniqueName: \"kubernetes.io/projected/823580ef-975b-4298-955b-fb3c0b5fefc3-kube-api-access-t997l\") pod \"machine-config-daemon-5wbk5\" (UID: \"823580ef-975b-4298-955b-fb3c0b5fefc3\") " pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.057938 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7rtd\" (UniqueName: \"kubernetes.io/projected/41f72a5f-4820-4dc2-a6c5-243550881aaf-kube-api-access-z7rtd\") pod \"multus-hvktx\" (UID: \"41f72a5f-4820-4dc2-a6c5-243550881aaf\") " pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.058908 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.059844 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.061514 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.061614 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pmp8\" (UniqueName: \"kubernetes.io/projected/ed61105a-bc90-46a4-991f-466e6836d94d-kube-api-access-9pmp8\") pod \"multus-additional-cni-plugins-s242q\" (UID: \"ed61105a-bc90-46a4-991f-466e6836d94d\") " pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.061740 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.063071 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbz4m\" (UniqueName: \"kubernetes.io/projected/d2545cd7-d1a5-4248-a0e1-eb6f07f0023e-kube-api-access-vbz4m\") pod \"node-resolver-xmb4p\" (UID: \"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\") " pod="openshift-dns/node-resolver-xmb4p" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.065627 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk2ht\" (UniqueName: \"kubernetes.io/projected/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-kube-api-access-xk2ht\") pod \"ovnkube-node-2fh5s\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.065999 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.066873 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.067988 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.068644 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.069694 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.071301 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.072303 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.073488 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.074149 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.075375 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xmb4p" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.075897 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.075970 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.076724 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.084666 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.087206 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.088152 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.090207 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.095599 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.096150 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.097343 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.098099 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.098955 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.099487 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.100749 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.102663 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.103826 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.104318 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.105349 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.105775 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.118892 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.119830 4717 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.119968 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.124127 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.127124 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.127645 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.133240 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.135093 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.139371 4717 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.141399 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.142333 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.155956 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.156696 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.157971 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.161268 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.162024 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.165729 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.166900 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.167419 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.169496 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.169961 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.173065 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.174304 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" 
path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.175802 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.175911 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.176802 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.178089 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.178650 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xmb4p" event={"ID":"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e","Type":"ContainerStarted","Data":"98e2305a608df3c497cbb32242fbaa80fc4d7afd36db23bdfe0ab1c3f7afd00c"} Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.178679 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f9bb1018118f7987fcfe40fdada581109c40832c43a5dd33dfaaafb047f6033f"} Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.178691 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"40d8875addff1c9b2083055a5125247a2bf6b28e285c198d760a913c1148c26f"} Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.187558 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa"} Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.188322 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.188691 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.199845 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.204740 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.210336 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.216797 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-s242q" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.224220 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.234489 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.235346 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-hvktx" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.247649 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-man
ager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: W0218 11:49:47.258109 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41f72a5f_4820_4dc2_a6c5_243550881aaf.slice/crio-4a197de405508e2e791489d99e064e2d1cb8d61769ee15af292420d62eb445b9 WatchSource:0}: Error finding container 4a197de405508e2e791489d99e064e2d1cb8d61769ee15af292420d62eb445b9: Status 404 returned error can't find the container with id 4a197de405508e2e791489d99e064e2d1cb8d61769ee15af292420d62eb445b9 Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.259800 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting 
controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.270248 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.282113 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.316658 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.316900 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.334550 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.343741 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.351122 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.360485 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.377071 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.389628 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: W0218 11:49:47.393377 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod823580ef_975b_4298_955b_fb3c0b5fefc3.slice/crio-21e8c14db0fb3844c0b487885c86b26e7d6b7aebce9f9af0875eb5320209eb45 WatchSource:0}: Error finding container 21e8c14db0fb3844c0b487885c86b26e7d6b7aebce9f9af0875eb5320209eb45: Status 404 returned error can't find the container with id 21e8c14db0fb3844c0b487885c86b26e7d6b7aebce9f9af0875eb5320209eb45 Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.402591 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.421417 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.430649 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.442416 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.444393 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:49:47 crc kubenswrapper[4717]: E0218 11:49:47.444618 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:49:48.444601344 +0000 UTC m=+22.846702660 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.452617 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.464922 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.477409 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.489592 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.502507 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.515868 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.524669 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready 
status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.533516 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.544982 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.545125 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.545159 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.545181 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.545197 4717 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:49:47 crc kubenswrapper[4717]: E0218 11:49:47.545360 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:49:47 crc kubenswrapper[4717]: E0218 11:49:47.545382 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:49:47 crc kubenswrapper[4717]: E0218 11:49:47.545392 4717 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:49:47 crc kubenswrapper[4717]: E0218 11:49:47.545450 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:49:47 crc kubenswrapper[4717]: E0218 11:49:47.545460 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:49:48.545438127 +0000 UTC m=+22.947539483 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:49:47 crc kubenswrapper[4717]: E0218 11:49:47.545467 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:49:47 crc kubenswrapper[4717]: E0218 11:49:47.545505 4717 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:47 crc kubenswrapper[4717]: E0218 11:49:47.545561 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 11:49:48.54554407 +0000 UTC m=+22.947645436 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:47 crc kubenswrapper[4717]: E0218 11:49:47.545563 4717 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:49:47 crc kubenswrapper[4717]: E0218 11:49:47.545395 4717 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:47 crc kubenswrapper[4717]: E0218 11:49:47.545608 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:49:48.545597101 +0000 UTC m=+22.947698417 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:49:47 crc kubenswrapper[4717]: E0218 11:49:47.545642 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-18 11:49:48.545616982 +0000 UTC m=+22.947718298 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.563402 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.575062 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.584580 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 11:49:47 crc kubenswrapper[4717]: I0218 11:49:47.908525 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 15:14:06.956058802 +0000 UTC Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.035463 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:49:48 crc kubenswrapper[4717]: E0218 11:49:48.035579 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.035463 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:49:48 crc kubenswrapper[4717]: E0218 11:49:48.035693 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.186970 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279"} Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.187024 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727"} Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.187037 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"21e8c14db0fb3844c0b487885c86b26e7d6b7aebce9f9af0875eb5320209eb45"} Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.188466 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xmb4p" 
event={"ID":"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e","Type":"ContainerStarted","Data":"f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73"} Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.190481 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hvktx" event={"ID":"41f72a5f-4820-4dc2-a6c5-243550881aaf","Type":"ContainerStarted","Data":"74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228"} Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.190535 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hvktx" event={"ID":"41f72a5f-4820-4dc2-a6c5-243550881aaf","Type":"ContainerStarted","Data":"4a197de405508e2e791489d99e064e2d1cb8d61769ee15af292420d62eb445b9"} Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.192334 4717 generic.go:334] "Generic (PLEG): container finished" podID="ed61105a-bc90-46a4-991f-466e6836d94d" containerID="85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa" exitCode=0 Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.192408 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" event={"ID":"ed61105a-bc90-46a4-991f-466e6836d94d","Type":"ContainerDied","Data":"85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa"} Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.192433 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" event={"ID":"ed61105a-bc90-46a4-991f-466e6836d94d","Type":"ContainerStarted","Data":"f61d68e0d451d5894d6f40e0c8a536ca92486009c572cd3d1dfe29d75a9fd507"} Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.194312 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316"} Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.194364 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"88f1e40f2e21be179cafb704eb5fecf1479ab293471e440804d61fd11ec81d3f"} Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.196173 4717 generic.go:334] "Generic (PLEG): container finished" podID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerID="c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519" exitCode=0 Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.196250 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerDied","Data":"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519"} Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.196325 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerStarted","Data":"e0a81d52d8278381e5d0a4615c75f67ba9dd8e9572b004ab955e1ff5efcb91e3"} Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.199692 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90"} Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.199739 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae"} Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.205514 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.220874 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.232090 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d
571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.246660 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.265851 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.278719 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.292872 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.311103 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.322186 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.332958 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.350558 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.363482 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.378811 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.395434 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.413342 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.426575 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 
11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.438272 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.452575 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.454098 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:49:48 crc kubenswrapper[4717]: E0218 11:49:48.454326 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:49:50.454305792 +0000 UTC m=+24.856407108 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.466927 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.480653 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.494112 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.507031 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.542793 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.555644 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.555705 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.555748 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.555772 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:48 crc kubenswrapper[4717]: E0218 11:49:48.555874 4717 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:49:48 crc kubenswrapper[4717]: E0218 11:49:48.555878 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:49:48 crc kubenswrapper[4717]: E0218 11:49:48.555925 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:49:48 crc kubenswrapper[4717]: E0218 11:49:48.555944 4717 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:48 crc kubenswrapper[4717]: E0218 11:49:48.555986 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:49:48 crc kubenswrapper[4717]: E0218 11:49:48.556008 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:49:48 crc kubenswrapper[4717]: E0218 11:49:48.556021 4717 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:48 crc kubenswrapper[4717]: E0218 11:49:48.555928 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:49:50.555912837 +0000 UTC m=+24.958014153 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:49:48 crc kubenswrapper[4717]: E0218 11:49:48.556054 4717 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:49:48 crc kubenswrapper[4717]: E0218 11:49:48.556074 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-18 11:49:50.556055101 +0000 UTC m=+24.958156497 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:48 crc kubenswrapper[4717]: E0218 11:49:48.556175 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 11:49:50.556151504 +0000 UTC m=+24.958252890 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:48 crc kubenswrapper[4717]: E0218 11:49:48.556190 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:49:50.556182754 +0000 UTC m=+24.958284160 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.562820 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.583196 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f5
5b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.596542 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d
571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.782627 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-4dzfm"] Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.783035 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-4dzfm" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.785626 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.786059 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.786423 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.786422 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.799641 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.811373 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.822486 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6
cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.835846 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.855705 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.858778 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/eba9a5cc-35c1-47ea-b225-1b57b40a5e0d-serviceca\") pod \"node-ca-4dzfm\" (UID: \"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\") " pod="openshift-image-registry/node-ca-4dzfm" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.858833 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eba9a5cc-35c1-47ea-b225-1b57b40a5e0d-host\") pod \"node-ca-4dzfm\" (UID: \"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\") " pod="openshift-image-registry/node-ca-4dzfm" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.858851 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj7cq\" (UniqueName: \"kubernetes.io/projected/eba9a5cc-35c1-47ea-b225-1b57b40a5e0d-kube-api-access-fj7cq\") pod \"node-ca-4dzfm\" (UID: \"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\") " pod="openshift-image-registry/node-ca-4dzfm" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.871792 4717 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.891139 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.909210 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 
20:32:42.706511565 +0000 UTC Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.909540 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"na
me\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 
2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.922438 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.934771 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.959866 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eba9a5cc-35c1-47ea-b225-1b57b40a5e0d-host\") pod \"node-ca-4dzfm\" (UID: \"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\") " pod="openshift-image-registry/node-ca-4dzfm" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.959907 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj7cq\" 
(UniqueName: \"kubernetes.io/projected/eba9a5cc-35c1-47ea-b225-1b57b40a5e0d-kube-api-access-fj7cq\") pod \"node-ca-4dzfm\" (UID: \"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\") " pod="openshift-image-registry/node-ca-4dzfm" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.959944 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/eba9a5cc-35c1-47ea-b225-1b57b40a5e0d-serviceca\") pod \"node-ca-4dzfm\" (UID: \"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\") " pod="openshift-image-registry/node-ca-4dzfm" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.960873 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/eba9a5cc-35c1-47ea-b225-1b57b40a5e0d-serviceca\") pod \"node-ca-4dzfm\" (UID: \"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\") " pod="openshift-image-registry/node-ca-4dzfm" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.960932 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eba9a5cc-35c1-47ea-b225-1b57b40a5e0d-host\") pod \"node-ca-4dzfm\" (UID: \"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\") " pod="openshift-image-registry/node-ca-4dzfm" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.964690 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:48Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:48 crc kubenswrapper[4717]: I0218 11:49:48.994784 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj7cq\" (UniqueName: \"kubernetes.io/projected/eba9a5cc-35c1-47ea-b225-1b57b40a5e0d-kube-api-access-fj7cq\") pod \"node-ca-4dzfm\" (UID: \"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\") " pod="openshift-image-registry/node-ca-4dzfm" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.027317 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.035460 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:49 crc kubenswrapper[4717]: E0218 11:49:49.035606 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.070410 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.095751 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4dzfm" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.105491 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: W0218 11:49:49.109300 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeba9a5cc_35c1_47ea_b225_1b57b40a5e0d.slice/crio-562be6effe2b349906218b2952517731373f419fe30edbb92e15054346b08daf WatchSource:0}: Error finding container 562be6effe2b349906218b2952517731373f419fe30edbb92e15054346b08daf: Status 404 returned error can't find the container with id 562be6effe2b349906218b2952517731373f419fe30edbb92e15054346b08daf Feb 18 11:49:49 crc 
kubenswrapper[4717]: I0218 11:49:49.209650 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4dzfm" event={"ID":"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d","Type":"ContainerStarted","Data":"562be6effe2b349906218b2952517731373f419fe30edbb92e15054346b08daf"} Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.214966 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" event={"ID":"ed61105a-bc90-46a4-991f-466e6836d94d","Type":"ContainerStarted","Data":"a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830"} Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.222287 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerStarted","Data":"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102"} Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.222331 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerStarted","Data":"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40"} Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.222341 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerStarted","Data":"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c"} Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.222353 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerStarted","Data":"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42"} Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.222362 4717 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerStarted","Data":"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df"} Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.236531 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.248223 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.259817 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.271498 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.305094 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d
571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.343914 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.385803 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.424534 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.464428 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.507633 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.551496 4717 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.585677 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.622476 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.666537 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:49Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.910280 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-12-16 09:59:04.263026605 +0000 UTC Feb 18 11:49:49 crc kubenswrapper[4717]: I0218 11:49:49.960224 4717 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.036078 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.036183 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:49:50 crc kubenswrapper[4717]: E0218 11:49:50.036192 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:49:50 crc kubenswrapper[4717]: E0218 11:49:50.036316 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.037725 4717 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.226080 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4dzfm" event={"ID":"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d","Type":"ContainerStarted","Data":"38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5"} Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.227526 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3"} Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.229147 4717 generic.go:334] "Generic (PLEG): container finished" podID="ed61105a-bc90-46a4-991f-466e6836d94d" containerID="a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830" exitCode=0 Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.229209 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" event={"ID":"ed61105a-bc90-46a4-991f-466e6836d94d","Type":"ContainerDied","Data":"a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830"} Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.232632 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerStarted","Data":"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac"} Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.239391 4717 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.259022 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.279944 4717 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.295688 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 
11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.304869 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.318079 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.329151 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.341315 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting 
controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.352764 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.363902 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.374651 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.385990 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.395315 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.406505 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6
cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.417935 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\
\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.427052 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be
4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.440323 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d
571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.454429 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.471545 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.474799 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:49:50 crc kubenswrapper[4717]: E0218 11:49:50.474888 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:49:54.47486948 +0000 UTC m=+28.876970796 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.504321 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\
\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.544011 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.575311 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.575357 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:50 crc kubenswrapper[4717]: 
I0218 11:49:50.575379 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.575408 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:49:50 crc kubenswrapper[4717]: E0218 11:49:50.575483 4717 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:49:50 crc kubenswrapper[4717]: E0218 11:49:50.575539 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:49:54.575522628 +0000 UTC m=+28.977623944 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:49:50 crc kubenswrapper[4717]: E0218 11:49:50.575548 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:49:50 crc kubenswrapper[4717]: E0218 11:49:50.575567 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:49:50 crc kubenswrapper[4717]: E0218 11:49:50.575578 4717 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:50 crc kubenswrapper[4717]: E0218 11:49:50.575628 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 11:49:54.575610051 +0000 UTC m=+28.977711367 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:50 crc kubenswrapper[4717]: E0218 11:49:50.575692 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:49:50 crc kubenswrapper[4717]: E0218 11:49:50.575704 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:49:50 crc kubenswrapper[4717]: E0218 11:49:50.575713 4717 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:50 crc kubenswrapper[4717]: E0218 11:49:50.575739 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 11:49:54.575731394 +0000 UTC m=+28.977832800 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:50 crc kubenswrapper[4717]: E0218 11:49:50.575768 4717 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:49:50 crc kubenswrapper[4717]: E0218 11:49:50.575831 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:49:54.575823027 +0000 UTC m=+28.977924343 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.585049 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.622836 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.663520 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086
a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.705128 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.746501 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.784456 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.823370 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:50Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:50 crc kubenswrapper[4717]: I0218 11:49:50.911244 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 22:34:49.814072176 +0000 UTC Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.036019 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:51 crc kubenswrapper[4717]: E0218 11:49:51.036154 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.237783 4717 generic.go:334] "Generic (PLEG): container finished" podID="ed61105a-bc90-46a4-991f-466e6836d94d" containerID="9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b" exitCode=0 Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.238009 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" event={"ID":"ed61105a-bc90-46a4-991f-466e6836d94d","Type":"ContainerDied","Data":"9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b"} Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.253743 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.265278 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.280370 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.294100 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.307686 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.327483 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.341338 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:51Z is after 2025-08-24T17:21:41Z" Feb 18 
11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.351144 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.362806 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.375074 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.390116 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 
configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 
11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedA
t\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.404241 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.416041 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.429761 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:51Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:51 crc kubenswrapper[4717]: I0218 11:49:51.911987 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 13:57:13.926704805 +0000 UTC Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.036277 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.036295 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:49:52 crc kubenswrapper[4717]: E0218 11:49:52.036409 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:49:52 crc kubenswrapper[4717]: E0218 11:49:52.036475 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.243069 4717 generic.go:334] "Generic (PLEG): container finished" podID="ed61105a-bc90-46a4-991f-466e6836d94d" containerID="663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2" exitCode=0 Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.243147 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" event={"ID":"ed61105a-bc90-46a4-991f-466e6836d94d","Type":"ContainerDied","Data":"663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2"} Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.249125 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerStarted","Data":"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa"} Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.257662 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.270330 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.282737 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6
cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.298141 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.316827 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.326233 4717 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.331165 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.331209 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.331159 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.331221 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.331531 4717 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.339095 4717 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.339390 4717 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.340484 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.340553 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.340566 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.340583 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.340618 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:52Z","lastTransitionTime":"2026-02-18T11:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.342391 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: E0218 11:49:52.353340 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.353733 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.356732 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.356774 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.356786 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.356799 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.356808 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:52Z","lastTransitionTime":"2026-02-18T11:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.364725 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: E0218 11:49:52.371064 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.374010 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.374039 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.374047 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.374059 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.374068 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:52Z","lastTransitionTime":"2026-02-18T11:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.377177 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: E0218 11:49:52.384918 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.388648 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.388859 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.388874 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.388881 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.388893 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.388901 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:52Z","lastTransitionTime":"2026-02-18T11:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.400607 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: E0218 11:49:52.400613 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.404786 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.404828 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.404836 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.404851 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.404860 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:52Z","lastTransitionTime":"2026-02-18T11:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.413141 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: E0218 11:49:52.417792 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: E0218 11:49:52.417927 4717 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.419402 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.419435 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.419445 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.419458 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.419467 4717 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:52Z","lastTransitionTime":"2026-02-18T11:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.426764 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:52Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.521929 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.521972 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.521982 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.521995 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.522004 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:52Z","lastTransitionTime":"2026-02-18T11:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.624464 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.624514 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.624527 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.624545 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.624557 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:52Z","lastTransitionTime":"2026-02-18T11:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.727447 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.727491 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.727503 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.727520 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.727530 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:52Z","lastTransitionTime":"2026-02-18T11:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.831252 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.831323 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.831332 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.831347 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.831359 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:52Z","lastTransitionTime":"2026-02-18T11:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.913519 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 16:41:38.11244943 +0000 UTC Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.933046 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.933077 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.933086 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.933099 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:52 crc kubenswrapper[4717]: I0218 11:49:52.933112 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:52Z","lastTransitionTime":"2026-02-18T11:49:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.035458 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.035532 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.035566 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.035574 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.035587 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.035596 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:53Z","lastTransitionTime":"2026-02-18T11:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:53 crc kubenswrapper[4717]: E0218 11:49:53.035601 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.141183 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.141247 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.141272 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.141294 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.141306 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:53Z","lastTransitionTime":"2026-02-18T11:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.242916 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.242959 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.242970 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.242987 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.242998 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:53Z","lastTransitionTime":"2026-02-18T11:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.254312 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" event={"ID":"ed61105a-bc90-46a4-991f-466e6836d94d","Type":"ContainerStarted","Data":"63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089"} Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.267727 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.278156 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\
\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.288343 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d18
8c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.302850 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.317413 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.331321 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.344925 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.345831 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.345952 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.346216 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 
11:49:53.346353 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.346594 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:53Z","lastTransitionTime":"2026-02-18T11:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.359183 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.368591 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.378704 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.391726 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pm
p8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.409350 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.421398 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2
af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T11:49:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.433231 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.448957 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.449240 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.449381 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.449486 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.449669 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:53Z","lastTransitionTime":"2026-02-18T11:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.552550 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.552770 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.552834 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.552902 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.552959 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:53Z","lastTransitionTime":"2026-02-18T11:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.655061 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.655095 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.655104 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.655117 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.655127 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:53Z","lastTransitionTime":"2026-02-18T11:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.757420 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.757501 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.757516 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.757532 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.757542 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:53Z","lastTransitionTime":"2026-02-18T11:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.859558 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.859814 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.859920 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.860070 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.860153 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:53Z","lastTransitionTime":"2026-02-18T11:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.914592 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:01:19.293714419 +0000 UTC Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.962112 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.962333 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.962415 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.962482 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:53 crc kubenswrapper[4717]: I0218 11:49:53.962541 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:53Z","lastTransitionTime":"2026-02-18T11:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.036034 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.036493 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:49:54 crc kubenswrapper[4717]: E0218 11:49:54.037337 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:49:54 crc kubenswrapper[4717]: E0218 11:49:54.037530 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.064831 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.064864 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.064876 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.064891 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.064902 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:54Z","lastTransitionTime":"2026-02-18T11:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.166832 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.166974 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.167154 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.167438 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.167674 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:54Z","lastTransitionTime":"2026-02-18T11:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.265370 4717 generic.go:334] "Generic (PLEG): container finished" podID="ed61105a-bc90-46a4-991f-466e6836d94d" containerID="63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089" exitCode=0 Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.265423 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" event={"ID":"ed61105a-bc90-46a4-991f-466e6836d94d","Type":"ContainerDied","Data":"63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089"} Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.270660 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.270814 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.271138 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.271242 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.271339 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:54Z","lastTransitionTime":"2026-02-18T11:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.271041 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerStarted","Data":"7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c"} Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.271678 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.271868 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.282915 4717 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.285579 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.297892 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.312662 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.312988 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d
571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.315851 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.332839 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.351014 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.364750 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.376024 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.376073 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.376087 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.376110 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.376066 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.376125 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:54Z","lastTransitionTime":"2026-02-18T11:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.389837 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z 
is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.403965 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.418950 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.431108 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.446523 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.463919 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.478449 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.478836 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.478875 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.478887 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.478907 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.478918 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:54Z","lastTransitionTime":"2026-02-18T11:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.493670 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.510485 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.517434 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:49:54 crc kubenswrapper[4717]: E0218 11:49:54.517687 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:50:02.517667244 +0000 UTC m=+36.919768560 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.527030 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.541476 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.555150 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.566549 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.578758 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6
cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.581900 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.581927 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.581936 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.581954 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.581968 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:54Z","lastTransitionTime":"2026-02-18T11:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.597811 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.618646 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.618958 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.619009 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.619045 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.619069 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:49:54 crc kubenswrapper[4717]: E0218 11:49:54.619085 4717 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:49:54 crc kubenswrapper[4717]: E0218 11:49:54.619182 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:02.619158856 +0000 UTC m=+37.021260172 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:49:54 crc kubenswrapper[4717]: E0218 11:49:54.619277 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:49:54 crc kubenswrapper[4717]: E0218 11:49:54.619300 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:49:54 crc kubenswrapper[4717]: E0218 11:49:54.619314 4717 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:54 crc kubenswrapper[4717]: E0218 11:49:54.619316 4717 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:49:54 crc kubenswrapper[4717]: E0218 11:49:54.619392 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:02.619373942 +0000 UTC m=+37.021475258 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:54 crc kubenswrapper[4717]: E0218 11:49:54.619434 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:02.619403333 +0000 UTC m=+37.021504839 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:49:54 crc kubenswrapper[4717]: E0218 11:49:54.619337 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:49:54 crc kubenswrapper[4717]: E0218 11:49:54.619473 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:49:54 crc kubenswrapper[4717]: E0218 11:49:54.619492 4717 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" 
not registered] Feb 18 11:49:54 crc kubenswrapper[4717]: E0218 11:49:54.619542 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:02.619532876 +0000 UTC m=+37.021634412 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.632594 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.644052 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.655562 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.670344 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.683095 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:54Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.684358 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.684396 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.684407 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.684426 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.684440 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:54Z","lastTransitionTime":"2026-02-18T11:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.786747 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.786797 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.786806 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.786820 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.786829 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:54Z","lastTransitionTime":"2026-02-18T11:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.889424 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.889642 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.889743 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.890597 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.890645 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:54Z","lastTransitionTime":"2026-02-18T11:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.916363 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 23:12:27.106432916 +0000 UTC Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.994480 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.994759 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.994845 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.994937 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:54 crc kubenswrapper[4717]: I0218 11:49:54.994997 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:54Z","lastTransitionTime":"2026-02-18T11:49:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.036535 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:55 crc kubenswrapper[4717]: E0218 11:49:55.036698 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.097077 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.097383 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.097460 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.097539 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.097608 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:55Z","lastTransitionTime":"2026-02-18T11:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.200296 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.200328 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.200336 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.200350 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.200360 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:55Z","lastTransitionTime":"2026-02-18T11:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.277112 4717 generic.go:334] "Generic (PLEG): container finished" podID="ed61105a-bc90-46a4-991f-466e6836d94d" containerID="6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88" exitCode=0 Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.277166 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" event={"ID":"ed61105a-bc90-46a4-991f-466e6836d94d","Type":"ContainerDied","Data":"6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88"} Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.277617 4717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.295138 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e
01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:55Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.302700 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.302734 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.302742 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.302756 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.302765 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:55Z","lastTransitionTime":"2026-02-18T11:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.306132 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:55Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.316966 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d
571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:55Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.333927 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:55Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.353009 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:55Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.366167 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:55Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.382351 4717 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:55Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.397219 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:55Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.405919 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:55 crc 
kubenswrapper[4717]: I0218 11:49:55.405960 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.405970 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.405987 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.405998 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:55Z","lastTransitionTime":"2026-02-18T11:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.408402 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T11:49:55Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.419442 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:55Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.435084 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:55Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.451065 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:55Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.467630 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:55Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.488165 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:55Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.508017 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.508064 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.508075 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.508092 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.508105 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:55Z","lastTransitionTime":"2026-02-18T11:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.610094 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.610148 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.610161 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.610177 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.610205 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:55Z","lastTransitionTime":"2026-02-18T11:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.713232 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.713282 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.713291 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.713304 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.713323 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:55Z","lastTransitionTime":"2026-02-18T11:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.815868 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.815912 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.815922 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.815939 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.815951 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:55Z","lastTransitionTime":"2026-02-18T11:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.916931 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:26:37.07273631 +0000 UTC Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.918647 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.918696 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.918712 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.918738 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:55 crc kubenswrapper[4717]: I0218 11:49:55.918756 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:55Z","lastTransitionTime":"2026-02-18T11:49:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.021014 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.021057 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.021067 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.021083 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.021092 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:56Z","lastTransitionTime":"2026-02-18T11:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.036392 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:49:56 crc kubenswrapper[4717]: E0218 11:49:56.036501 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.036395 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:49:56 crc kubenswrapper[4717]: E0218 11:49:56.036602 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.123099 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.123136 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.123146 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.123159 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.123169 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:56Z","lastTransitionTime":"2026-02-18T11:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.225041 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.225069 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.225078 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.225090 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.225099 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:56Z","lastTransitionTime":"2026-02-18T11:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.287792 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" event={"ID":"ed61105a-bc90-46a4-991f-466e6836d94d","Type":"ContainerStarted","Data":"ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c"} Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.289014 4717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.301152 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:56Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.313959 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting 
controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:56Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.325899 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:56Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.334940 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.334977 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.334990 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.335006 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.335016 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:56Z","lastTransitionTime":"2026-02-18T11:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.338061 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:56Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.350670 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:56Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.359922 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:56Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.371109 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6
cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:56Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.385192 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b2
6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952
f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:4
9:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:56Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.408804 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:56Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.423410 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:56Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.441710 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.441749 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.441760 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.441780 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.441792 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:56Z","lastTransitionTime":"2026-02-18T11:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.444641 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:56Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.458808 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:56Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.471823 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-18T11:49:56Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.496202 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:56Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.544975 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.545018 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.545033 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.545057 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.545067 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:56Z","lastTransitionTime":"2026-02-18T11:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.647790 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.647830 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.647844 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.647858 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.647869 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:56Z","lastTransitionTime":"2026-02-18T11:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.750496 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.750535 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.750544 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.750559 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.750570 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:56Z","lastTransitionTime":"2026-02-18T11:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.853649 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.853704 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.853716 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.853739 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.853752 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:56Z","lastTransitionTime":"2026-02-18T11:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.917717 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 22:14:53.367705157 +0000 UTC Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.956149 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.956191 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.956200 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.956215 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:56 crc kubenswrapper[4717]: I0218 11:49:56.956225 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:56Z","lastTransitionTime":"2026-02-18T11:49:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.035922 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:57 crc kubenswrapper[4717]: E0218 11:49:57.036059 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.054385 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.059041 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.059105 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.059118 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.059136 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.059154 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:57Z","lastTransitionTime":"2026-02-18T11:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.062462 4717 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.071747 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.087416 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9
c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.101063 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.114122 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.130217 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.147447 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.161453 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.161509 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.161524 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 
11:49:57.161545 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.161559 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:57Z","lastTransitionTime":"2026-02-18T11:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.163424 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.175311 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.188088 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.205029 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.224062 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.239280 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.251723 4717 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.264781 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.264836 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.264847 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.264862 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.264872 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:57Z","lastTransitionTime":"2026-02-18T11:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.301666 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovnkube-controller/0.log" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.306102 4717 generic.go:334] "Generic (PLEG): container finished" podID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerID="7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c" exitCode=1 Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.306151 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerDied","Data":"7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c"} Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.306758 4717 scope.go:117] "RemoveContainer" containerID="7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.319583 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.332228 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.351889 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6
cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.367767 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.367824 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.367833 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.367849 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.367859 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:57Z","lastTransitionTime":"2026-02-18T11:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.368553 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.387691 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:57Z\\\",\\\"message\\\":\\\"gressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:49:57.080120 5963 reflector.go:311] Stopping reflector *v1.EgressIP 
(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:49:57.080158 5963 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 11:49:57.080302 5963 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0218 11:49:57.080712 5963 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.081297 5963 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.081496 5963 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.082118 5963 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-over
rides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.405052 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.419466 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.434737 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.449787 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.461686 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.471531 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.471579 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.471595 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.471617 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.471631 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:57Z","lastTransitionTime":"2026-02-18T11:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.475991 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.490287 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.510076 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.523964 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.574484 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.574555 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.574569 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.574593 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.574607 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:57Z","lastTransitionTime":"2026-02-18T11:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.677690 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.677730 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.677742 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.677760 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.677772 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:57Z","lastTransitionTime":"2026-02-18T11:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.779896 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.779945 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.779961 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.779982 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.779998 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:57Z","lastTransitionTime":"2026-02-18T11:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.882849 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.882891 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.882903 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.882953 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.882964 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:57Z","lastTransitionTime":"2026-02-18T11:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.918595 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:18:17.894787593 +0000 UTC Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.996543 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.996593 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.996605 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.996623 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:57 crc kubenswrapper[4717]: I0218 11:49:57.996636 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:57Z","lastTransitionTime":"2026-02-18T11:49:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.035912 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.035931 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:49:58 crc kubenswrapper[4717]: E0218 11:49:58.036078 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:49:58 crc kubenswrapper[4717]: E0218 11:49:58.036210 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.098882 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.098938 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.098948 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.098968 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.098980 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:58Z","lastTransitionTime":"2026-02-18T11:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.221028 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.221069 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.221080 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.221094 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.221107 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:58Z","lastTransitionTime":"2026-02-18T11:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.311498 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovnkube-controller/0.log" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.313410 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerStarted","Data":"cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb"} Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.313538 4717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.322919 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.322961 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.322970 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.322983 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.322994 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:58Z","lastTransitionTime":"2026-02-18T11:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.327068 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:58Z 
is after 2025-08-24T17:21:41Z" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.339542 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:58Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.352660 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:58Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.366125 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 
configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 
11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedA
t\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:58Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.378177 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:58Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.389633 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:58Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.401919 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:58Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.412951 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:58Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.423644 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:58Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.425037 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.425126 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.425137 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.425151 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.425160 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:58Z","lastTransitionTime":"2026-02-18T11:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.435661 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:58Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.452589 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:57Z\\\",\\\"message\\\":\\\"gressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:49:57.080120 5963 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:49:57.080158 5963 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 11:49:57.080302 5963 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0218 11:49:57.080712 5963 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.081297 5963 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.081496 5963 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.082118 5963 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\"
,\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:58Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.464680 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:58Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.476405 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:58Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.492002 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66383
6549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:58Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.527903 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.527936 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.527944 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.527958 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.527967 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:58Z","lastTransitionTime":"2026-02-18T11:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.630119 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.630164 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.630172 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.630187 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.630197 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:58Z","lastTransitionTime":"2026-02-18T11:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.732833 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.732866 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.732884 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.732901 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.732912 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:58Z","lastTransitionTime":"2026-02-18T11:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.835705 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.835746 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.835755 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.835772 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.835785 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:58Z","lastTransitionTime":"2026-02-18T11:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.919202 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:39:44.565880395 +0000 UTC Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.941604 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.941641 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.941650 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.941663 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:58 crc kubenswrapper[4717]: I0218 11:49:58.941673 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:58Z","lastTransitionTime":"2026-02-18T11:49:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.035935 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:49:59 crc kubenswrapper[4717]: E0218 11:49:59.036516 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.043300 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.043353 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.043363 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.043374 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.043383 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:59Z","lastTransitionTime":"2026-02-18T11:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.145225 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.145281 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.145292 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.145306 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.145316 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:59Z","lastTransitionTime":"2026-02-18T11:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.247599 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.247630 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.247638 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.247652 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.247661 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:59Z","lastTransitionTime":"2026-02-18T11:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.318094 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovnkube-controller/1.log" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.319383 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovnkube-controller/0.log" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.322467 4717 generic.go:334] "Generic (PLEG): container finished" podID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerID="cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb" exitCode=1 Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.322710 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerDied","Data":"cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb"} Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.322767 4717 scope.go:117] "RemoveContainer" containerID="7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.323767 4717 scope.go:117] "RemoveContainer" containerID="cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb" Feb 18 11:49:59 crc kubenswrapper[4717]: E0218 11:49:59.323929 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.339719 4717 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.350042 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.350086 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.350095 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.350110 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.350122 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:59Z","lastTransitionTime":"2026-02-18T11:49:59Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.357445 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.371748 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.384362 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.396888 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.409298 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.421405 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6
cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.440768 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b2
6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952
f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:4
9:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.441182 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm"] Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.441742 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.443597 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.444480 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.452904 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.452958 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.452974 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.452992 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.453004 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:59Z","lastTransitionTime":"2026-02-18T11:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.462590 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:57Z\\\",\\\"message\\\":\\\"gressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:49:57.080120 5963 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:49:57.080158 5963 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 11:49:57.080302 5963 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0218 11:49:57.080712 5963 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.081297 5963 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.081496 5963 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.082118 5963 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:58Z\\\",\\\"message\\\":\\\"72c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column 
_uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 11:49:58.328737 6129 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:49:58.328806 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b
519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.476299 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.489108 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.502133 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.515584 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.526398 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.540627 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.551520 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.555081 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.555107 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.555115 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.555131 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.555140 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:59Z","lastTransitionTime":"2026-02-18T11:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.560517 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.572636 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a92f4826-4ec6-4676-977c-fdf3552b9ea5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nqztm\" (UID: \"a92f4826-4ec6-4676-977c-fdf3552b9ea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.572701 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a92f4826-4ec6-4676-977c-fdf3552b9ea5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nqztm\" (UID: \"a92f4826-4ec6-4676-977c-fdf3552b9ea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.572725 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cvjl\" (UniqueName: \"kubernetes.io/projected/a92f4826-4ec6-4676-977c-fdf3552b9ea5-kube-api-access-7cvjl\") pod \"ovnkube-control-plane-749d76644c-nqztm\" (UID: 
\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.572755 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a92f4826-4ec6-4676-977c-fdf3552b9ea5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nqztm\" (UID: \"a92f4826-4ec6-4676-977c-fdf3552b9ea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.572697 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.584993 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.595374 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.606079 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.618935 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/stat
ic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.630455 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.642844 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.654670 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.660600 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.660637 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.660647 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.660660 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.660669 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:59Z","lastTransitionTime":"2026-02-18T11:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.673305 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a92f4826-4ec6-4676-977c-fdf3552b9ea5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nqztm\" (UID: \"a92f4826-4ec6-4676-977c-fdf3552b9ea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.673375 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cvjl\" (UniqueName: \"kubernetes.io/projected/a92f4826-4ec6-4676-977c-fdf3552b9ea5-kube-api-access-7cvjl\") pod \"ovnkube-control-plane-749d76644c-nqztm\" (UID: \"a92f4826-4ec6-4676-977c-fdf3552b9ea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.673419 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a92f4826-4ec6-4676-977c-fdf3552b9ea5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nqztm\" (UID: \"a92f4826-4ec6-4676-977c-fdf3552b9ea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.673448 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a92f4826-4ec6-4676-977c-fdf3552b9ea5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nqztm\" (UID: \"a92f4826-4ec6-4676-977c-fdf3552b9ea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.673984 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/a92f4826-4ec6-4676-977c-fdf3552b9ea5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nqztm\" (UID: \"a92f4826-4ec6-4676-977c-fdf3552b9ea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.674249 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a92f4826-4ec6-4676-977c-fdf3552b9ea5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nqztm\" (UID: \"a92f4826-4ec6-4676-977c-fdf3552b9ea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.677137 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:57Z\\\",\\\"message\\\":\\\"gressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:49:57.080120 5963 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:49:57.080158 5963 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 11:49:57.080302 5963 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0218 11:49:57.080712 5963 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.081297 5963 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.081496 5963 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.082118 5963 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:58Z\\\",\\\"message\\\":\\\"72c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column 
_uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 11:49:58.328737 6129 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:49:58.328806 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b
519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.679599 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a92f4826-4ec6-4676-977c-fdf3552b9ea5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nqztm\" (UID: \"a92f4826-4ec6-4676-977c-fdf3552b9ea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.691381 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.694893 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cvjl\" (UniqueName: \"kubernetes.io/projected/a92f4826-4ec6-4676-977c-fdf3552b9ea5-kube-api-access-7cvjl\") pod \"ovnkube-control-plane-749d76644c-nqztm\" (UID: \"a92f4826-4ec6-4676-977c-fdf3552b9ea5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.705241 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.720821 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66383
6549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.756087 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.762553 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.762604 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.762614 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.762631 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.762642 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:59Z","lastTransitionTime":"2026-02-18T11:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.788579 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.802738 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.814755 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.828621 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66383
6549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.848227 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:57Z\\\",\\\"message\\\":\\\"gressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:49:57.080120 5963 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:49:57.080158 5963 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 11:49:57.080302 5963 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0218 11:49:57.080712 5963 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.081297 5963 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.081496 5963 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.082118 5963 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:58Z\\\",\\\"message\\\":\\\"72c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column 
_uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 11:49:58.328737 6129 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:49:58.328806 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b
519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.860450 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.865956 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.865996 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.866007 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.866023 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.866066 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:59Z","lastTransitionTime":"2026-02-18T11:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.874309 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.886860 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139
c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.901221 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] 
\\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.913717 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.919777 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 00:35:41.014850725 +0000 UTC Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.930639 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.944571 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.959252 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/stat
ic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.968299 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.968335 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.968345 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.968362 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.968373 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:49:59Z","lastTransitionTime":"2026-02-18T11:49:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.973657 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d
571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:49:59 crc kubenswrapper[4717]: I0218 11:49:59.990360 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:49:59Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.004244 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.035465 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:00 crc kubenswrapper[4717]: E0218 11:50:00.035602 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.035957 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:00 crc kubenswrapper[4717]: E0218 11:50:00.036025 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.071017 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.071053 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.071064 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.071078 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.071088 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:00Z","lastTransitionTime":"2026-02-18T11:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.175670 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.175723 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.175732 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.175748 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.175762 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:00Z","lastTransitionTime":"2026-02-18T11:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.187584 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-gxzpl"] Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.188107 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:00 crc kubenswrapper[4717]: E0218 11:50:00.188178 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.202554 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/ku
be-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.217721 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.229798 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.249046 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.260231 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.273107 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66383
6549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.276839 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs\") pod \"network-metrics-daemon-gxzpl\" (UID: \"a549f413-5b44-4fac-a21e-4f41cc30fbe6\") " pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.276935 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj2pm\" (UniqueName: \"kubernetes.io/projected/a549f413-5b44-4fac-a21e-4f41cc30fbe6-kube-api-access-vj2pm\") pod \"network-metrics-daemon-gxzpl\" (UID: \"a549f413-5b44-4fac-a21e-4f41cc30fbe6\") " pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.278132 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.278173 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.278185 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.278202 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.278215 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:00Z","lastTransitionTime":"2026-02-18T11:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.291415 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:57Z\\\",\\\"message\\\":\\\"gressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:49:57.080120 5963 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:49:57.080158 5963 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 11:49:57.080302 5963 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0218 11:49:57.080712 5963 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.081297 5963 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.081496 5963 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.082118 5963 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:58Z\\\",\\\"message\\\":\\\"72c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column 
_uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 11:49:58.328737 6129 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:49:58.328806 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b
519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.306243 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.316149 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.327615 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.327924 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" event={"ID":"a92f4826-4ec6-4676-977c-fdf3552b9ea5","Type":"ContainerStarted","Data":"3243c34e60a30eb3b454953dadad7f2a8c3a1ecca55178aa20c9cf20289dc2c3"} Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.327976 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" event={"ID":"a92f4826-4ec6-4676-977c-fdf3552b9ea5","Type":"ContainerStarted","Data":"a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958"} Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.327989 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" event={"ID":"a92f4826-4ec6-4676-977c-fdf3552b9ea5","Type":"ContainerStarted","Data":"d2e6be15355d8290e8e3ccf6407d84fb2c99778153ddf22da936fb499d155b8c"} Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.329463 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovnkube-controller/1.log" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.344141 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.360758 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:
45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89
c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.373144 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.377508 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj2pm\" (UniqueName: \"kubernetes.io/projected/a549f413-5b44-4fac-a21e-4f41cc30fbe6-kube-api-access-vj2pm\") pod \"network-metrics-daemon-gxzpl\" (UID: \"a549f413-5b44-4fac-a21e-4f41cc30fbe6\") " 
pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.377574 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs\") pod \"network-metrics-daemon-gxzpl\" (UID: \"a549f413-5b44-4fac-a21e-4f41cc30fbe6\") " pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:00 crc kubenswrapper[4717]: E0218 11:50:00.377677 4717 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:50:00 crc kubenswrapper[4717]: E0218 11:50:00.377735 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs podName:a549f413-5b44-4fac-a21e-4f41cc30fbe6 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:00.877711315 +0000 UTC m=+35.279812651 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs") pod "network-metrics-daemon-gxzpl" (UID: "a549f413-5b44-4fac-a21e-4f41cc30fbe6") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.379943 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.379979 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.379991 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.380006 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.380016 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:00Z","lastTransitionTime":"2026-02-18T11:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.386468 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.395160 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj2pm\" (UniqueName: \"kubernetes.io/projected/a549f413-5b44-4fac-a21e-4f41cc30fbe6-kube-api-access-vj2pm\") pod \"network-metrics-daemon-gxzpl\" (UID: \"a549f413-5b44-4fac-a21e-4f41cc30fbe6\") " pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.397441 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.409947 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.422315 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.433892 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.445108 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.456244 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.469627 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.482362 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.482504 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.482918 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.482955 4717 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.482969 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:00Z","lastTransitionTime":"2026-02-18T11:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.485998 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.498642 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.510910 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.520608 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1
ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.533439 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.541921 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.553557 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6
cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.568286 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b2
6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952
f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:4
9:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.585967 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.586019 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.586032 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.586051 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.586064 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:00Z","lastTransitionTime":"2026-02-18T11:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.588573 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7768fb2e1949b605c8ff7e05af279c46c4ccbdb430a49e80c47aafab28ca5a6c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:57Z\\\",\\\"message\\\":\\\"gressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:49:57.080120 5963 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 11:49:57.080158 5963 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 11:49:57.080302 5963 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0218 11:49:57.080712 5963 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.081297 5963 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.081496 5963 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 11:49:57.082118 5963 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:58Z\\\",\\\"message\\\":\\\"72c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column 
_uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 11:49:58.328737 6129 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:49:58.328806 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b
519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.600374 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.611647 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:00Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.688069 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 
11:50:00.688116 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.688126 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.688142 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.688154 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:00Z","lastTransitionTime":"2026-02-18T11:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.791397 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.791468 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.791484 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.791508 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.791529 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:00Z","lastTransitionTime":"2026-02-18T11:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.883712 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs\") pod \"network-metrics-daemon-gxzpl\" (UID: \"a549f413-5b44-4fac-a21e-4f41cc30fbe6\") " pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:00 crc kubenswrapper[4717]: E0218 11:50:00.884000 4717 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:50:00 crc kubenswrapper[4717]: E0218 11:50:00.884148 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs podName:a549f413-5b44-4fac-a21e-4f41cc30fbe6 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:01.884118254 +0000 UTC m=+36.286219760 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs") pod "network-metrics-daemon-gxzpl" (UID: "a549f413-5b44-4fac-a21e-4f41cc30fbe6") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.895422 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.895495 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.895510 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.895529 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.895539 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:00Z","lastTransitionTime":"2026-02-18T11:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.920107 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 23:49:59.974515003 +0000 UTC Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.998040 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.998078 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.998089 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.998104 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:00 crc kubenswrapper[4717]: I0218 11:50:00.998116 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:00Z","lastTransitionTime":"2026-02-18T11:50:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.035529 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:01 crc kubenswrapper[4717]: E0218 11:50:01.035655 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.100415 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.100458 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.100467 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.100481 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.100492 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:01Z","lastTransitionTime":"2026-02-18T11:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.203318 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.203377 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.203389 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.203408 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.203419 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:01Z","lastTransitionTime":"2026-02-18T11:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.305825 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.305867 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.305878 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.305897 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.305910 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:01Z","lastTransitionTime":"2026-02-18T11:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.408541 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.408600 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.408611 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.408632 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.408646 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:01Z","lastTransitionTime":"2026-02-18T11:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.511334 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.511405 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.511418 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.511436 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.511449 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:01Z","lastTransitionTime":"2026-02-18T11:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.613603 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.613653 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.613666 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.613682 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.613695 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:01Z","lastTransitionTime":"2026-02-18T11:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.716701 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.716745 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.716761 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.716783 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.716798 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:01Z","lastTransitionTime":"2026-02-18T11:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.819545 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.819586 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.819597 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.819612 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.819621 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:01Z","lastTransitionTime":"2026-02-18T11:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.900586 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs\") pod \"network-metrics-daemon-gxzpl\" (UID: \"a549f413-5b44-4fac-a21e-4f41cc30fbe6\") " pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:01 crc kubenswrapper[4717]: E0218 11:50:01.900742 4717 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:50:01 crc kubenswrapper[4717]: E0218 11:50:01.900794 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs podName:a549f413-5b44-4fac-a21e-4f41cc30fbe6 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:03.900780717 +0000 UTC m=+38.302882033 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs") pod "network-metrics-daemon-gxzpl" (UID: "a549f413-5b44-4fac-a21e-4f41cc30fbe6") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.920338 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:39:32.735575981 +0000 UTC Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.921760 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.921799 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.921807 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.921822 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:01 crc kubenswrapper[4717]: I0218 11:50:01.921832 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:01Z","lastTransitionTime":"2026-02-18T11:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.024323 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.024742 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.024759 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.024775 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.024784 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:02Z","lastTransitionTime":"2026-02-18T11:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.035898 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.036029 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.035898 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.036124 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.035904 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.036205 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.127875 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.127913 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.127923 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.127940 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.127952 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:02Z","lastTransitionTime":"2026-02-18T11:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.229903 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.229928 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.229936 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.229948 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.229957 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:02Z","lastTransitionTime":"2026-02-18T11:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.332360 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.332431 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.332449 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.332471 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.332488 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:02Z","lastTransitionTime":"2026-02-18T11:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.435058 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.435116 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.435124 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.435139 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.435151 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:02Z","lastTransitionTime":"2026-02-18T11:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.538071 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.538110 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.538123 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.538139 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.538150 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:02Z","lastTransitionTime":"2026-02-18T11:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.608771 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.608948 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 11:50:18.608927962 +0000 UTC m=+53.011029288 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.628936 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.628993 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.629010 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.629032 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.629049 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:02Z","lastTransitionTime":"2026-02-18T11:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.645025 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.650077 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.650137 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.650160 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.650188 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.650209 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:02Z","lastTransitionTime":"2026-02-18T11:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.673189 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.677820 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.677851 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.677858 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.677871 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.677879 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:02Z","lastTransitionTime":"2026-02-18T11:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.691207 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.695047 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.695089 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.695152 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.695388 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.695451 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:02Z","lastTransitionTime":"2026-02-18T11:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.709682 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.709725 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.709748 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.709765 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.709878 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.709893 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.709902 4717 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.709944 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:18.70993008 +0000 UTC m=+53.112031396 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.709958 4717 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.710038 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.710055 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:18.710030613 +0000 UTC m=+53.112131969 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.710068 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.710088 4717 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.710143 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:18.710128866 +0000 UTC m=+53.112230202 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.710242 4717 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.710393 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:18.710344612 +0000 UTC m=+53.112445938 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.710817 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redha
t-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc
4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\
"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":4488870
27}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.715055 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.715107 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.715123 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.715142 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.715156 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:02Z","lastTransitionTime":"2026-02-18T11:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.732607 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:02Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:02 crc kubenswrapper[4717]: E0218 11:50:02.732814 4717 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.734126 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.734162 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.734173 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.734195 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.734209 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:02Z","lastTransitionTime":"2026-02-18T11:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.836217 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.836249 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.836277 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.836293 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.836304 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:02Z","lastTransitionTime":"2026-02-18T11:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.921383 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 23:01:24.1662233 +0000 UTC Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.938220 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.938271 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.938280 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.938293 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:02 crc kubenswrapper[4717]: I0218 11:50:02.938302 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:02Z","lastTransitionTime":"2026-02-18T11:50:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.036485 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:03 crc kubenswrapper[4717]: E0218 11:50:03.036635 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.041947 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.041995 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.042010 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.042030 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.042043 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:03Z","lastTransitionTime":"2026-02-18T11:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.144783 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.144824 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.144832 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.144846 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.144856 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:03Z","lastTransitionTime":"2026-02-18T11:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.247413 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.247454 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.247464 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.247479 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.247490 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:03Z","lastTransitionTime":"2026-02-18T11:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.350418 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.350459 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.350472 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.350486 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.350497 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:03Z","lastTransitionTime":"2026-02-18T11:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.452728 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.452760 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.452768 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.452780 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.452788 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:03Z","lastTransitionTime":"2026-02-18T11:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.554954 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.554995 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.555008 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.555024 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.555036 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:03Z","lastTransitionTime":"2026-02-18T11:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.657359 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.657404 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.657417 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.657437 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.657451 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:03Z","lastTransitionTime":"2026-02-18T11:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.665507 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.666245 4717 scope.go:117] "RemoveContainer" containerID="cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb" Feb 18 11:50:03 crc kubenswrapper[4717]: E0218 11:50:03.666504 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.679580 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1
ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.704540 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.723520 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\"
,\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.753783 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.759196 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.759232 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.759244 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 
11:50:03.759279 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.759293 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:03Z","lastTransitionTime":"2026-02-18T11:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.775511 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.791825 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.803201 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.815006 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6
cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.831026 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b2
6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952
f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:4
9:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.849430 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:58Z\\\",\\\"message\\\":\\\"72c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid 
== {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 11:49:58.328737 6129 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:49:58.328806 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa67
23e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.861286 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.861321 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.861330 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.861344 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.861354 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:03Z","lastTransitionTime":"2026-02-18T11:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.862516 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.874937 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.888875 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.900920 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-18T11:50:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.913025 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.921565 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs\") pod \"network-metrics-daemon-gxzpl\" (UID: \"a549f413-5b44-4fac-a21e-4f41cc30fbe6\") " pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.921605 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:03:35.767889881 +0000 UTC Feb 18 11:50:03 crc kubenswrapper[4717]: E0218 11:50:03.921706 4717 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:50:03 crc kubenswrapper[4717]: E0218 11:50:03.921875 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs podName:a549f413-5b44-4fac-a21e-4f41cc30fbe6 nodeName:}" failed. 
No retries permitted until 2026-02-18 11:50:07.9218407 +0000 UTC m=+42.323942016 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs") pod "network-metrics-daemon-gxzpl" (UID: "a549f413-5b44-4fac-a21e-4f41cc30fbe6") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.924818 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:03Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:03 crc 
kubenswrapper[4717]: I0218 11:50:03.964224 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.964294 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.964306 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.964326 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:03 crc kubenswrapper[4717]: I0218 11:50:03.964342 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:03Z","lastTransitionTime":"2026-02-18T11:50:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.035850 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.035903 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.035866 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:04 crc kubenswrapper[4717]: E0218 11:50:04.036013 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:04 crc kubenswrapper[4717]: E0218 11:50:04.036072 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:04 crc kubenswrapper[4717]: E0218 11:50:04.036161 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.066787 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.066818 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.066829 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.066844 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.066856 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:04Z","lastTransitionTime":"2026-02-18T11:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.169665 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.170167 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.170182 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.170199 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.170211 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:04Z","lastTransitionTime":"2026-02-18T11:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.272511 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.272751 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.272855 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.272928 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.272985 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:04Z","lastTransitionTime":"2026-02-18T11:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.374791 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.374839 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.374852 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.374870 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.374883 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:04Z","lastTransitionTime":"2026-02-18T11:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.477223 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.477285 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.477294 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.477308 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.477317 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:04Z","lastTransitionTime":"2026-02-18T11:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.579734 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.579782 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.579794 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.579808 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.579817 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:04Z","lastTransitionTime":"2026-02-18T11:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.682468 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.682512 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.682521 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.682534 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.682545 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:04Z","lastTransitionTime":"2026-02-18T11:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.784531 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.784662 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.784676 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.784690 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.784701 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:04Z","lastTransitionTime":"2026-02-18T11:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.886939 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.886982 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.886994 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.887014 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.887031 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:04Z","lastTransitionTime":"2026-02-18T11:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.922634 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:35:47.290259918 +0000 UTC Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.990371 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.990413 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.990427 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.990446 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:04 crc kubenswrapper[4717]: I0218 11:50:04.990458 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:04Z","lastTransitionTime":"2026-02-18T11:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.035833 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:05 crc kubenswrapper[4717]: E0218 11:50:05.035995 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.093192 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.093279 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.093291 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.093310 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.093351 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:05Z","lastTransitionTime":"2026-02-18T11:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.196570 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.196622 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.196638 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.196662 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.196679 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:05Z","lastTransitionTime":"2026-02-18T11:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.300545 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.300592 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.300604 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.300621 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.300632 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:05Z","lastTransitionTime":"2026-02-18T11:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.403960 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.404016 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.404030 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.404053 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.404068 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:05Z","lastTransitionTime":"2026-02-18T11:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.507320 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.507619 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.507758 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.507848 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.507936 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:05Z","lastTransitionTime":"2026-02-18T11:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.611504 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.611555 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.611566 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.611584 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.611597 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:05Z","lastTransitionTime":"2026-02-18T11:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.715583 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.715651 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.715660 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.715677 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.715687 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:05Z","lastTransitionTime":"2026-02-18T11:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.817970 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.818054 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.818066 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.818085 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.818102 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:05Z","lastTransitionTime":"2026-02-18T11:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.920818 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.920867 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.920878 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.920898 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.920910 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:05Z","lastTransitionTime":"2026-02-18T11:50:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:05 crc kubenswrapper[4717]: I0218 11:50:05.923154 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 13:30:21.156546505 +0000 UTC Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.023874 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.023923 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.023935 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.023951 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.023962 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:06Z","lastTransitionTime":"2026-02-18T11:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.036554 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.036567 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:06 crc kubenswrapper[4717]: E0218 11:50:06.036709 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:06 crc kubenswrapper[4717]: E0218 11:50:06.036834 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.036598 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:06 crc kubenswrapper[4717]: E0218 11:50:06.036946 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.127081 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.127565 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.127714 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.127810 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.127880 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:06Z","lastTransitionTime":"2026-02-18T11:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.229928 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.230247 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.230451 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.230707 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.230866 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:06Z","lastTransitionTime":"2026-02-18T11:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.334162 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.334498 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.334601 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.334696 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.334780 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:06Z","lastTransitionTime":"2026-02-18T11:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.437825 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.437885 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.437904 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.437929 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.437953 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:06Z","lastTransitionTime":"2026-02-18T11:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.541139 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.541204 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.541224 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.541251 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.541314 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:06Z","lastTransitionTime":"2026-02-18T11:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.644088 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.644157 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.644178 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.644201 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.644240 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:06Z","lastTransitionTime":"2026-02-18T11:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.747211 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.747294 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.747305 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.747318 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.747328 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:06Z","lastTransitionTime":"2026-02-18T11:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.849809 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.850091 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.850217 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.850353 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.850430 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:06Z","lastTransitionTime":"2026-02-18T11:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.923603 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 04:24:21.356148178 +0000 UTC Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.953046 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.953102 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.953110 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.953127 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:06 crc kubenswrapper[4717]: I0218 11:50:06.953137 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:06Z","lastTransitionTime":"2026-02-18T11:50:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.036170 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:07 crc kubenswrapper[4717]: E0218 11:50:07.036371 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.056455 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.056501 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.056512 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.056532 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.056546 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:07Z","lastTransitionTime":"2026-02-18T11:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.060287 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:07Z 
is after 2025-08-24T17:21:41Z" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.074580 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.086660 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.098699 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:07 crc 
kubenswrapper[4717]: I0218 11:50:07.112711 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.128850 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\"
,\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.145105 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.158291 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.158331 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.158341 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 
11:50:07.158355 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.158364 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:07Z","lastTransitionTime":"2026-02-18T11:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.161100 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.175576 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1
ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.192428 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.205442 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.219207 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6
cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.234972 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b2
6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952
f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:4
9:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.260144 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.260415 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.260157 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:58Z\\\",\\\"message\\\":\\\"72c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 11:49:58.328737 6129 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:49:58.328806 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa67
23e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.260511 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.260688 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.260702 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:07Z","lastTransitionTime":"2026-02-18T11:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.278658 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.290820 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.397627 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.397665 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.397678 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.397697 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.397707 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:07Z","lastTransitionTime":"2026-02-18T11:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.501455 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.501797 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.501895 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.502046 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.502137 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:07Z","lastTransitionTime":"2026-02-18T11:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.604558 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.605036 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.605112 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.605180 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.605300 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:07Z","lastTransitionTime":"2026-02-18T11:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.708160 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.708215 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.708227 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.708243 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.708255 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:07Z","lastTransitionTime":"2026-02-18T11:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.810876 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.810912 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.810922 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.810939 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.810950 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:07Z","lastTransitionTime":"2026-02-18T11:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.913231 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.913337 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.913351 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.913389 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.913403 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:07Z","lastTransitionTime":"2026-02-18T11:50:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.924798 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 18:36:53.887204063 +0000 UTC Feb 18 11:50:07 crc kubenswrapper[4717]: I0218 11:50:07.970451 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs\") pod \"network-metrics-daemon-gxzpl\" (UID: \"a549f413-5b44-4fac-a21e-4f41cc30fbe6\") " pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:07 crc kubenswrapper[4717]: E0218 11:50:07.970578 4717 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:50:07 crc kubenswrapper[4717]: E0218 11:50:07.970651 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs podName:a549f413-5b44-4fac-a21e-4f41cc30fbe6 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:15.970631723 +0000 UTC m=+50.372733039 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs") pod "network-metrics-daemon-gxzpl" (UID: "a549f413-5b44-4fac-a21e-4f41cc30fbe6") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.015623 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.015660 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.015673 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.015689 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.015700 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:08Z","lastTransitionTime":"2026-02-18T11:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.036113 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.036147 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.036161 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:08 crc kubenswrapper[4717]: E0218 11:50:08.036245 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:08 crc kubenswrapper[4717]: E0218 11:50:08.036375 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:08 crc kubenswrapper[4717]: E0218 11:50:08.036720 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.118337 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.118382 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.118392 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.118415 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.118436 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:08Z","lastTransitionTime":"2026-02-18T11:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.220137 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.220182 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.220194 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.220210 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.220222 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:08Z","lastTransitionTime":"2026-02-18T11:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.322159 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.322204 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.322212 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.322226 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.322236 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:08Z","lastTransitionTime":"2026-02-18T11:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.424893 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.424925 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.424933 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.424945 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.424954 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:08Z","lastTransitionTime":"2026-02-18T11:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.528439 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.528501 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.528519 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.528542 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.528557 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:08Z","lastTransitionTime":"2026-02-18T11:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.630918 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.630998 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.631021 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.631053 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.631081 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:08Z","lastTransitionTime":"2026-02-18T11:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.733779 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.733868 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.734068 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.734115 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.734156 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:08Z","lastTransitionTime":"2026-02-18T11:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.837658 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.837765 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.837795 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.837835 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.837859 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:08Z","lastTransitionTime":"2026-02-18T11:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.925759 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 12:13:00.038964718 +0000 UTC Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.941459 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.941510 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.941530 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.941559 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:08 crc kubenswrapper[4717]: I0218 11:50:08.941579 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:08Z","lastTransitionTime":"2026-02-18T11:50:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.035640 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:09 crc kubenswrapper[4717]: E0218 11:50:09.035824 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.043557 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.043606 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.043620 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.043641 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.043659 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:09Z","lastTransitionTime":"2026-02-18T11:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.147711 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.147799 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.147825 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.147856 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.147879 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:09Z","lastTransitionTime":"2026-02-18T11:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.251840 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.251917 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.251931 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.251964 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.251980 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:09Z","lastTransitionTime":"2026-02-18T11:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.354763 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.354822 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.354836 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.354858 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.354869 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:09Z","lastTransitionTime":"2026-02-18T11:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.457893 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.457963 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.457980 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.458005 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.458019 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:09Z","lastTransitionTime":"2026-02-18T11:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.560536 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.560581 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.560589 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.560614 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.560624 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:09Z","lastTransitionTime":"2026-02-18T11:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.663111 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.663164 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.663175 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.663194 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.663208 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:09Z","lastTransitionTime":"2026-02-18T11:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.765957 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.766013 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.766028 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.766051 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.766066 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:09Z","lastTransitionTime":"2026-02-18T11:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.868967 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.869022 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.869035 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.869057 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.869075 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:09Z","lastTransitionTime":"2026-02-18T11:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.926583 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 08:49:48.372736927 +0000 UTC Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.973087 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.973149 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.973162 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.973185 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:09 crc kubenswrapper[4717]: I0218 11:50:09.973199 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:09Z","lastTransitionTime":"2026-02-18T11:50:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.035999 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.036108 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:10 crc kubenswrapper[4717]: E0218 11:50:10.036231 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.036473 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:10 crc kubenswrapper[4717]: E0218 11:50:10.036422 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:10 crc kubenswrapper[4717]: E0218 11:50:10.036665 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.076452 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.076759 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.076892 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.077027 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.077170 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:10Z","lastTransitionTime":"2026-02-18T11:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.180569 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.180609 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.180620 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.180637 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.180648 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:10Z","lastTransitionTime":"2026-02-18T11:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.283789 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.283843 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.283862 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.283884 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.283898 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:10Z","lastTransitionTime":"2026-02-18T11:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.387048 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.387609 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.387689 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.387813 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.387898 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:10Z","lastTransitionTime":"2026-02-18T11:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.491092 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.491198 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.491212 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.491231 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.491244 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:10Z","lastTransitionTime":"2026-02-18T11:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.594836 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.594887 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.594901 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.594919 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.594930 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:10Z","lastTransitionTime":"2026-02-18T11:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.697585 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.697652 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.697667 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.697684 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.697696 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:10Z","lastTransitionTime":"2026-02-18T11:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.800237 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.800301 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.800314 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.800331 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.800344 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:10Z","lastTransitionTime":"2026-02-18T11:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.903127 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.903168 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.903177 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.903193 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.903205 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:10Z","lastTransitionTime":"2026-02-18T11:50:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:10 crc kubenswrapper[4717]: I0218 11:50:10.927796 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:09:49.537006743 +0000 UTC Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.005832 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.005922 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.005943 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.005979 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.006002 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:11Z","lastTransitionTime":"2026-02-18T11:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.036661 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:11 crc kubenswrapper[4717]: E0218 11:50:11.036790 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.108469 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.108535 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.108552 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.108574 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.108586 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:11Z","lastTransitionTime":"2026-02-18T11:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.210965 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.211014 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.211027 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.211046 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.211062 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:11Z","lastTransitionTime":"2026-02-18T11:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.314699 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.314767 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.314783 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.314807 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.314823 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:11Z","lastTransitionTime":"2026-02-18T11:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.417040 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.417115 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.417127 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.417146 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.417159 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:11Z","lastTransitionTime":"2026-02-18T11:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.520143 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.520201 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.520217 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.520236 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.520248 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:11Z","lastTransitionTime":"2026-02-18T11:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.623003 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.623062 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.623074 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.623096 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.623107 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:11Z","lastTransitionTime":"2026-02-18T11:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.726669 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.726746 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.726760 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.726782 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.726797 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:11Z","lastTransitionTime":"2026-02-18T11:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.830591 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.830642 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.830651 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.830667 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.830687 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:11Z","lastTransitionTime":"2026-02-18T11:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.928573 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:06:58.946828955 +0000 UTC Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.933470 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.933514 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.933527 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.933550 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:11 crc kubenswrapper[4717]: I0218 11:50:11.933565 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:11Z","lastTransitionTime":"2026-02-18T11:50:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.035468 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.035497 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.035593 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:12 crc kubenswrapper[4717]: E0218 11:50:12.035666 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:12 crc kubenswrapper[4717]: E0218 11:50:12.035844 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:12 crc kubenswrapper[4717]: E0218 11:50:12.036087 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.041574 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.041642 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.041665 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.041690 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.041713 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:12Z","lastTransitionTime":"2026-02-18T11:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.144556 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.144599 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.144608 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.144621 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.144631 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:12Z","lastTransitionTime":"2026-02-18T11:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.247665 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.247729 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.247744 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.247773 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.247790 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:12Z","lastTransitionTime":"2026-02-18T11:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.349946 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.350012 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.350025 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.350042 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.350054 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:12Z","lastTransitionTime":"2026-02-18T11:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.453035 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.453130 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.453153 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.453185 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.453223 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:12Z","lastTransitionTime":"2026-02-18T11:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.556445 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.556523 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.556545 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.556574 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.556590 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:12Z","lastTransitionTime":"2026-02-18T11:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.659682 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.659744 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.659754 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.659772 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.659784 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:12Z","lastTransitionTime":"2026-02-18T11:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.763716 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.763759 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.763985 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.764003 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.764034 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:12Z","lastTransitionTime":"2026-02-18T11:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.850463 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.850504 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.850514 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.850529 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.850540 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:12Z","lastTransitionTime":"2026-02-18T11:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:12 crc kubenswrapper[4717]: E0218 11:50:12.861521 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.865141 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.865180 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.865191 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.865207 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.865218 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:12Z","lastTransitionTime":"2026-02-18T11:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:12 crc kubenswrapper[4717]: E0218 11:50:12.877953 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.885962 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.886015 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.886031 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.886050 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.886063 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:12Z","lastTransitionTime":"2026-02-18T11:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:12 crc kubenswrapper[4717]: E0218 11:50:12.900919 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.906119 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.906215 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.906233 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.906249 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.906283 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:12Z","lastTransitionTime":"2026-02-18T11:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:12 crc kubenswrapper[4717]: E0218 11:50:12.918803 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.922395 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.922460 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.922474 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.922489 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.922501 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:12Z","lastTransitionTime":"2026-02-18T11:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.929304 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 22:45:25.952232272 +0000 UTC Feb 18 11:50:12 crc kubenswrapper[4717]: E0218 11:50:12.934565 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",
\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:12Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:12 crc kubenswrapper[4717]: E0218 11:50:12.934731 4717 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.936818 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.936869 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.936880 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.936898 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:12 crc kubenswrapper[4717]: I0218 11:50:12.936910 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:12Z","lastTransitionTime":"2026-02-18T11:50:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.036039 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:13 crc kubenswrapper[4717]: E0218 11:50:13.036230 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.039925 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.039974 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.039985 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.040006 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.040019 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:13Z","lastTransitionTime":"2026-02-18T11:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.142505 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.142567 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.142580 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.142597 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.142608 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:13Z","lastTransitionTime":"2026-02-18T11:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.245616 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.245660 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.245674 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.245692 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.245705 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:13Z","lastTransitionTime":"2026-02-18T11:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.348389 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.348440 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.348453 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.348471 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.348486 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:13Z","lastTransitionTime":"2026-02-18T11:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.452519 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.452560 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.452572 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.452593 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.452605 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:13Z","lastTransitionTime":"2026-02-18T11:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.555169 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.555209 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.555220 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.555234 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.555245 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:13Z","lastTransitionTime":"2026-02-18T11:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.658240 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.658302 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.658314 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.658330 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.658342 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:13Z","lastTransitionTime":"2026-02-18T11:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.760695 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.760732 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.760743 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.760760 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.760771 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:13Z","lastTransitionTime":"2026-02-18T11:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.863474 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.863614 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.863624 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.863642 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.863652 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:13Z","lastTransitionTime":"2026-02-18T11:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.930221 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:42:51.693281947 +0000 UTC Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.966513 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.966776 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.966844 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.966944 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:13 crc kubenswrapper[4717]: I0218 11:50:13.967024 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:13Z","lastTransitionTime":"2026-02-18T11:50:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.035916 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.035979 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:14 crc kubenswrapper[4717]: E0218 11:50:14.036064 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:14 crc kubenswrapper[4717]: E0218 11:50:14.036148 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.036506 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:14 crc kubenswrapper[4717]: E0218 11:50:14.036887 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.037187 4717 scope.go:117] "RemoveContainer" containerID="cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.069440 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.069489 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.069502 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.069518 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.069530 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:14Z","lastTransitionTime":"2026-02-18T11:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.171742 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.171776 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.171788 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.171803 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.171814 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:14Z","lastTransitionTime":"2026-02-18T11:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.274269 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.274301 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.274310 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.274325 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.274349 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:14Z","lastTransitionTime":"2026-02-18T11:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.376723 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.376767 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.376779 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.376793 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.376803 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:14Z","lastTransitionTime":"2026-02-18T11:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.427521 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovnkube-controller/1.log" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.429736 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerStarted","Data":"a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b"} Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.430391 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.445386 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef
25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.464288 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:58Z\\\",\\\"message\\\":\\\"72c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 11:49:58.328737 6129 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:49:58.328806 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.479114 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.479150 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.479158 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.479171 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.479181 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:14Z","lastTransitionTime":"2026-02-18T11:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.480788 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.492518 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.506448 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.521000 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.534691 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.548995 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.570049 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.581846 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.581892 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.581902 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.581917 4717 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.581929 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:14Z","lastTransitionTime":"2026-02-18T11:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.585622 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.598929 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.612404 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.627071 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1
ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.641629 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.654385 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.668302 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6
cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.669385 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.680522 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.685370 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.685426 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.685435 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.685455 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.685465 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:14Z","lastTransitionTime":"2026-02-18T11:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.687931 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.709912 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:58Z\\\",\\\"message\\\":\\\"72c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 11:49:58.328737 6129 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:49:58.328806 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.724696 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.739836 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.756313 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.770134 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.786253 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.787750 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.787804 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.787817 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.787833 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.787844 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:14Z","lastTransitionTime":"2026-02-18T11:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.800508 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc 
kubenswrapper[4717]: I0218 11:50:14.817414 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.834907 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\"
,\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.850622 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.864959 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.877322 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1
ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.890139 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.890183 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.890196 4717 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.890213 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.890225 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:14Z","lastTransitionTime":"2026-02-18T11:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.892944 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.906363 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.919935 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:14Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.931231 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:08:11.088855244 +0000 UTC Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.993105 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.993146 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.993163 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.993180 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:14 crc kubenswrapper[4717]: I0218 11:50:14.993191 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:14Z","lastTransitionTime":"2026-02-18T11:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.035681 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:15 crc kubenswrapper[4717]: E0218 11:50:15.035823 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.095315 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.095346 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.095358 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.095374 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.095384 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:15Z","lastTransitionTime":"2026-02-18T11:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.197815 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.197851 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.197862 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.197878 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.197887 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:15Z","lastTransitionTime":"2026-02-18T11:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.300979 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.301037 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.301047 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.301070 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.301085 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:15Z","lastTransitionTime":"2026-02-18T11:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.403746 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.403800 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.403813 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.403830 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.403841 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:15Z","lastTransitionTime":"2026-02-18T11:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.434476 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovnkube-controller/2.log" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.435372 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovnkube-controller/1.log" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.438646 4717 generic.go:334] "Generic (PLEG): container finished" podID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerID="a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b" exitCode=1 Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.438723 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerDied","Data":"a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b"} Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.438774 4717 scope.go:117] "RemoveContainer" containerID="cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.441649 4717 scope.go:117] "RemoveContainer" containerID="a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b" Feb 18 11:50:15 crc kubenswrapper[4717]: E0218 11:50:15.442765 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.465566 4717 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.483438 4717 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.498072 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.507029 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.507097 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.507113 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.507181 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.507200 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:15Z","lastTransitionTime":"2026-02-18T11:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.512051 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc 
kubenswrapper[4717]: I0218 11:50:15.526076 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.539424 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7384bb-88b8-4289-bb8b-75f73c4aa836\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d69511a7f432f28b50125317816ad4277c99af230bedea36b51eb065df9550b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4613447ce28f5f823dca2464f6a32d4b53a6edcffdab8b14b6bee24ad063a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4b77ec7db777a423715817b1109136588611133bfce104406dd19d01764f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.555616 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\"
,\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.571981 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.585312 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.596420 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1
ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.607994 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.609473 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.609510 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.609524 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.609565 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.609580 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:15Z","lastTransitionTime":"2026-02-18T11:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.620305 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.631998 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.647561 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.666020 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd0565cd407bcefb6fe2b7c83b1d46d36aee56efa7239401004273e40590cebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:49:58Z\\\",\\\"message\\\":\\\"72c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid 
== {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {eb8eef51-1a8d-43f9-ae2e-3b2cc00ded60}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 11:49:58.328737 6129 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:49:58.328806 6129 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"message\\\":\\\"\\\\nI0218 11:50:14.800637 6349 
handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:50:14.800662 6349 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:50:14.800648 6349 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:50:14.800685 6349 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 11:50:14.800699 6349 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 11:50:14.800713 6349 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 11:50:14.800725 6349 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:50:14.800758 6349 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:50:14.803437 6349 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 11:50:14.803578 6349 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:50:14.803608 6349 factory.go:656] Stopping watch factory\\\\nI0218 11:50:14.803627 6349 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:50:14.803676 6349 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 11:50:14.803691 6349 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:50:14.803709 6349 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:50:14.803827 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:50:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.679784 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.691859 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:15Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.711709 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 
11:50:15.711739 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.711747 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.711761 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.711771 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:15Z","lastTransitionTime":"2026-02-18T11:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.814020 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.814083 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.814100 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.814122 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.814139 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:15Z","lastTransitionTime":"2026-02-18T11:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.917454 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.917500 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.917516 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.917539 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.917559 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:15Z","lastTransitionTime":"2026-02-18T11:50:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:15 crc kubenswrapper[4717]: I0218 11:50:15.931466 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 00:45:17.634427471 +0000 UTC Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.020844 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.020905 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.020920 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.020943 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.020958 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:16Z","lastTransitionTime":"2026-02-18T11:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.036501 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.036602 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:16 crc kubenswrapper[4717]: E0218 11:50:16.036763 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.036501 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:16 crc kubenswrapper[4717]: E0218 11:50:16.036623 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:16 crc kubenswrapper[4717]: E0218 11:50:16.036859 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.060418 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs\") pod \"network-metrics-daemon-gxzpl\" (UID: \"a549f413-5b44-4fac-a21e-4f41cc30fbe6\") " pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:16 crc kubenswrapper[4717]: E0218 11:50:16.060538 4717 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:50:16 crc kubenswrapper[4717]: E0218 11:50:16.060582 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs podName:a549f413-5b44-4fac-a21e-4f41cc30fbe6 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:32.06056905 +0000 UTC m=+66.462670366 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs") pod "network-metrics-daemon-gxzpl" (UID: "a549f413-5b44-4fac-a21e-4f41cc30fbe6") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.123829 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.123866 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.123878 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.123894 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.123907 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:16Z","lastTransitionTime":"2026-02-18T11:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.226934 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.226990 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.227004 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.227023 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.227041 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:16Z","lastTransitionTime":"2026-02-18T11:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.329970 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.330009 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.330017 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.330030 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.330040 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:16Z","lastTransitionTime":"2026-02-18T11:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.432399 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.432440 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.432448 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.432461 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.432470 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:16Z","lastTransitionTime":"2026-02-18T11:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.442906 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovnkube-controller/2.log" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.445677 4717 scope.go:117] "RemoveContainer" containerID="a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b" Feb 18 11:50:16 crc kubenswrapper[4717]: E0218 11:50:16.445854 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.456973 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.468336 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.481781 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.501899 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"message\\\":\\\"\\\\nI0218 11:50:14.800637 6349 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:50:14.800662 6349 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:50:14.800648 6349 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:50:14.800685 6349 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0218 11:50:14.800699 6349 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 11:50:14.800713 6349 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 11:50:14.800725 6349 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:50:14.800758 6349 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:50:14.803437 6349 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 11:50:14.803578 6349 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:50:14.803608 6349 factory.go:656] Stopping watch factory\\\\nI0218 11:50:14.803627 6349 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:50:14.803676 6349 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 11:50:14.803691 6349 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:50:14.803709 6349 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:50:14.803827 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:50:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa67
23e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.516805 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.530437 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.534350 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 
11:50:16.534391 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.534402 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.534418 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.534429 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:16Z","lastTransitionTime":"2026-02-18T11:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.544718 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66383
6549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.559330 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.569717 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.577930 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.586480 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc 
kubenswrapper[4717]: I0218 11:50:16.595427 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7384bb-88b8-4289-bb8b-75f73c4aa836\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d69511a7f432f28b50125317816ad4277c99af230bedea36b51eb065df9550b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4613447ce28f5f823dca2464f6a32d4b53a6edcffdab8b14b6bee24ad063a93\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4b77ec7db777a423715817b1109136588611133bfce104406dd19d01764f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.606415 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\"
,\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.616732 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.628355 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.636112 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.636157 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.636169 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.636187 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.636197 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:16Z","lastTransitionTime":"2026-02-18T11:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.638961 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.648973 4717 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery
-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:16Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.738624 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.738689 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.738711 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.738740 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.738763 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:16Z","lastTransitionTime":"2026-02-18T11:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.841182 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.841237 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.841248 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.841291 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.841303 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:16Z","lastTransitionTime":"2026-02-18T11:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.932374 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 22:31:03.546484373 +0000 UTC Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.943863 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.943922 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.943940 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.943962 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:16 crc kubenswrapper[4717]: I0218 11:50:16.943981 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:16Z","lastTransitionTime":"2026-02-18T11:50:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.036475 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:17 crc kubenswrapper[4717]: E0218 11:50:17.036656 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.045630 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.045682 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.045692 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.045704 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.045715 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:17Z","lastTransitionTime":"2026-02-18T11:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.050808 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.062718 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.073763 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc 
kubenswrapper[4717]: I0218 11:50:17.085852 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc 
kubenswrapper[4717]: I0218 11:50:17.106863 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8
ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.126210 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.140321 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.147298 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.147345 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.147359 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:17 crc 
kubenswrapper[4717]: I0218 11:50:17.147380 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.147392 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:17Z","lastTransitionTime":"2026-02-18T11:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.158351 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18
T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.178562 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.195221 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7384bb-88b8-4289-bb8b-75f73c4aa836\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d69511a7f432f28b50125317816ad4277c99af230bedea36b51eb065df9550b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4613447ce28f5f823dca2464f6a32d4b53a6edcffdab8b14b6bee24ad063a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4b77ec7db777a423715817b1109136588611133bfce104406dd19d01764f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.208361 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d
571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.223136 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.234139 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.248223 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.253488 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.253528 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.253541 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.253562 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.253576 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:17Z","lastTransitionTime":"2026-02-18T11:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.267121 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.283967 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66383
6549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.319411 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"message\\\":\\\"\\\\nI0218 11:50:14.800637 6349 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:50:14.800662 6349 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:50:14.800648 6349 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:50:14.800685 6349 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0218 11:50:14.800699 6349 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 11:50:14.800713 6349 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 11:50:14.800725 6349 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:50:14.800758 6349 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:50:14.803437 6349 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 11:50:14.803578 6349 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:50:14.803608 6349 factory.go:656] Stopping watch factory\\\\nI0218 11:50:14.803627 6349 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:50:14.803676 6349 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 11:50:14.803691 6349 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:50:14.803709 6349 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:50:14.803827 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:50:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa67
23e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:17Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.355674 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.355715 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.355727 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.355740 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.355749 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:17Z","lastTransitionTime":"2026-02-18T11:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.457464 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.457505 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.457516 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.457531 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.457542 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:17Z","lastTransitionTime":"2026-02-18T11:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.559884 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.560151 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.560237 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.560360 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.560459 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:17Z","lastTransitionTime":"2026-02-18T11:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.663065 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.663108 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.663122 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.663140 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.663151 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:17Z","lastTransitionTime":"2026-02-18T11:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.765722 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.765782 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.765795 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.765819 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.765837 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:17Z","lastTransitionTime":"2026-02-18T11:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.868716 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.868843 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.868858 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.868877 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.868890 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:17Z","lastTransitionTime":"2026-02-18T11:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.933328 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 07:40:57.276612934 +0000 UTC Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.971469 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.971531 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.971542 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.971557 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:17 crc kubenswrapper[4717]: I0218 11:50:17.971567 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:17Z","lastTransitionTime":"2026-02-18T11:50:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.036160 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.036231 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:18 crc kubenswrapper[4717]: E0218 11:50:18.036302 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.036332 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:18 crc kubenswrapper[4717]: E0218 11:50:18.036415 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:18 crc kubenswrapper[4717]: E0218 11:50:18.036582 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.074509 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.074548 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.074560 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.074575 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.074587 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:18Z","lastTransitionTime":"2026-02-18T11:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.177357 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.177392 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.177401 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.177414 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.177423 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:18Z","lastTransitionTime":"2026-02-18T11:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.279498 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.279547 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.279563 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.279583 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.279597 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:18Z","lastTransitionTime":"2026-02-18T11:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.382283 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.382635 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.382704 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.382950 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.383024 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:18Z","lastTransitionTime":"2026-02-18T11:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.485963 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.486014 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.486025 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.486044 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.486056 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:18Z","lastTransitionTime":"2026-02-18T11:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.588490 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.588550 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.588561 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.588576 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.588588 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:18Z","lastTransitionTime":"2026-02-18T11:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.683646 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:50:18 crc kubenswrapper[4717]: E0218 11:50:18.683750 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 11:50:50.683727078 +0000 UTC m=+85.085828394 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.691519 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.691784 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.691925 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.692073 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.692190 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:18Z","lastTransitionTime":"2026-02-18T11:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.784835 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.784877 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.784898 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.784914 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:18 crc kubenswrapper[4717]: E0218 11:50:18.785037 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 18 11:50:18 crc kubenswrapper[4717]: E0218 11:50:18.785055 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:50:18 crc kubenswrapper[4717]: E0218 11:50:18.785077 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:50:18 crc kubenswrapper[4717]: E0218 11:50:18.785111 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:50:18 crc kubenswrapper[4717]: E0218 11:50:18.785123 4717 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:50:18 crc kubenswrapper[4717]: E0218 11:50:18.785132 4717 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:50:18 crc kubenswrapper[4717]: E0218 11:50:18.785176 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:50.785159898 +0000 UTC m=+85.187261224 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:50:18 crc kubenswrapper[4717]: E0218 11:50:18.785080 4717 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:50:18 crc kubenswrapper[4717]: E0218 11:50:18.785230 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:50.7852199 +0000 UTC m=+85.187321206 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:50:18 crc kubenswrapper[4717]: E0218 11:50:18.785249 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:50.78524193 +0000 UTC m=+85.187343246 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:50:18 crc kubenswrapper[4717]: E0218 11:50:18.785460 4717 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:50:18 crc kubenswrapper[4717]: E0218 11:50:18.785534 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:50:50.785518128 +0000 UTC m=+85.187619464 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.795095 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.795375 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.795489 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.795583 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:18 crc 
kubenswrapper[4717]: I0218 11:50:18.795675 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:18Z","lastTransitionTime":"2026-02-18T11:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.898570 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.898621 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.898635 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.898654 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.898669 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:18Z","lastTransitionTime":"2026-02-18T11:50:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:18 crc kubenswrapper[4717]: I0218 11:50:18.934505 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 23:59:19.736890578 +0000 UTC Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.000891 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.000933 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.000947 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.000966 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.000979 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:19Z","lastTransitionTime":"2026-02-18T11:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.036533 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:19 crc kubenswrapper[4717]: E0218 11:50:19.036705 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.102553 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.102587 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.102596 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.102608 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.102617 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:19Z","lastTransitionTime":"2026-02-18T11:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.204541 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.205099 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.205423 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.205792 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.206107 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:19Z","lastTransitionTime":"2026-02-18T11:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.309499 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.309558 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.309575 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.309597 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.309616 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:19Z","lastTransitionTime":"2026-02-18T11:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.412733 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.413092 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.413238 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.413388 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.413530 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:19Z","lastTransitionTime":"2026-02-18T11:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.516642 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.517042 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.517153 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.517301 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.517399 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:19Z","lastTransitionTime":"2026-02-18T11:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.620441 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.620501 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.620522 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.620550 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.620570 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:19Z","lastTransitionTime":"2026-02-18T11:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.723022 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.723327 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.723458 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.723672 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.723774 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:19Z","lastTransitionTime":"2026-02-18T11:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.826149 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.826193 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.826204 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.826227 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.826239 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:19Z","lastTransitionTime":"2026-02-18T11:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.929204 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.929545 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.929718 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.929868 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.930067 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:19Z","lastTransitionTime":"2026-02-18T11:50:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:19 crc kubenswrapper[4717]: I0218 11:50:19.936313 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:45:24.389025044 +0000 UTC Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.033758 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.034075 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.034232 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.034510 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.034654 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:20Z","lastTransitionTime":"2026-02-18T11:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.036547 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.036556 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.036626 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:20 crc kubenswrapper[4717]: E0218 11:50:20.037007 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:20 crc kubenswrapper[4717]: E0218 11:50:20.036850 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:20 crc kubenswrapper[4717]: E0218 11:50:20.037147 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.138577 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.138973 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.139043 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.139105 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.139168 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:20Z","lastTransitionTime":"2026-02-18T11:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.242287 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.242355 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.242377 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.242402 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.242421 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:20Z","lastTransitionTime":"2026-02-18T11:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.345648 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.346052 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.346246 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.346571 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.346840 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:20Z","lastTransitionTime":"2026-02-18T11:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.449762 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.449845 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.449857 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.449876 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.449888 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:20Z","lastTransitionTime":"2026-02-18T11:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.553499 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.553574 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.553593 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.553617 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.553636 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:20Z","lastTransitionTime":"2026-02-18T11:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.656133 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.656217 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.656248 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.656276 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.656285 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:20Z","lastTransitionTime":"2026-02-18T11:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.759238 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.759339 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.759358 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.759381 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.759399 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:20Z","lastTransitionTime":"2026-02-18T11:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.861623 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.861672 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.861683 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.861698 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.861705 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:20Z","lastTransitionTime":"2026-02-18T11:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.938057 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 23:58:35.420440186 +0000 UTC Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.964163 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.964207 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.964220 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.964234 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:20 crc kubenswrapper[4717]: I0218 11:50:20.964244 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:20Z","lastTransitionTime":"2026-02-18T11:50:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.036202 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:21 crc kubenswrapper[4717]: E0218 11:50:21.036397 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.066385 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.066424 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.066440 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.066460 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.066475 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:21Z","lastTransitionTime":"2026-02-18T11:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.169022 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.169078 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.169091 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.169108 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.169123 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:21Z","lastTransitionTime":"2026-02-18T11:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.272287 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.272367 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.272401 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.272430 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.272452 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:21Z","lastTransitionTime":"2026-02-18T11:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.374786 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.374859 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.374875 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.374897 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.374912 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:21Z","lastTransitionTime":"2026-02-18T11:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.476641 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.476688 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.476699 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.476713 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.476725 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:21Z","lastTransitionTime":"2026-02-18T11:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.579147 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.579181 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.579189 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.579202 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.579211 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:21Z","lastTransitionTime":"2026-02-18T11:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.682103 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.682549 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.682670 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.682778 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.682888 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:21Z","lastTransitionTime":"2026-02-18T11:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.785852 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.785896 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.785906 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.785922 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.785935 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:21Z","lastTransitionTime":"2026-02-18T11:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.888300 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.888349 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.888362 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.888384 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.888397 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:21Z","lastTransitionTime":"2026-02-18T11:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.939218 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 18:01:29.060038478 +0000 UTC Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.991059 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.991146 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.991160 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.991178 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:21 crc kubenswrapper[4717]: I0218 11:50:21.991190 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:21Z","lastTransitionTime":"2026-02-18T11:50:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.035955 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:22 crc kubenswrapper[4717]: E0218 11:50:22.036136 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.036205 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:22 crc kubenswrapper[4717]: E0218 11:50:22.036359 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.036544 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:22 crc kubenswrapper[4717]: E0218 11:50:22.036715 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.093532 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.093587 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.093597 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.093612 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.093621 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:22Z","lastTransitionTime":"2026-02-18T11:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.195811 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.195892 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.195909 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.195928 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.195943 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:22Z","lastTransitionTime":"2026-02-18T11:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.299683 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.299744 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.299766 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.299797 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.299818 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:22Z","lastTransitionTime":"2026-02-18T11:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.403069 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.403114 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.403124 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.403141 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.403153 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:22Z","lastTransitionTime":"2026-02-18T11:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.505644 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.505702 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.505720 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.505744 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.505761 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:22Z","lastTransitionTime":"2026-02-18T11:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.608240 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.608296 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.608306 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.608322 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.608333 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:22Z","lastTransitionTime":"2026-02-18T11:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.710793 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.710832 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.710840 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.710856 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.710866 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:22Z","lastTransitionTime":"2026-02-18T11:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.813179 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.813227 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.813236 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.813250 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.813295 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:22Z","lastTransitionTime":"2026-02-18T11:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.915783 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.915830 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.915846 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.915864 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.915877 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:22Z","lastTransitionTime":"2026-02-18T11:50:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:22 crc kubenswrapper[4717]: I0218 11:50:22.939657 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:30:16.07004255 +0000 UTC Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.019637 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.019692 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.019704 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.019722 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.019735 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:23Z","lastTransitionTime":"2026-02-18T11:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.036398 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:23 crc kubenswrapper[4717]: E0218 11:50:23.036599 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.122799 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.122833 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.122843 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.122859 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.122870 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:23Z","lastTransitionTime":"2026-02-18T11:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.177248 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.177309 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.177319 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.177335 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.177347 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:23Z","lastTransitionTime":"2026-02-18T11:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:23 crc kubenswrapper[4717]: E0218 11:50:23.190500 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:23Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.196047 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.196088 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.196099 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.196113 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.196124 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:23Z","lastTransitionTime":"2026-02-18T11:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.213371 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.213419 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.213431 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.213448 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.213462 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:23Z","lastTransitionTime":"2026-02-18T11:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.234107 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.234190 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.234205 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.234226 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.234238 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:23Z","lastTransitionTime":"2026-02-18T11:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:23 crc kubenswrapper[4717]: E0218 11:50:23.245225 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:23Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.248335 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.248371 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.248381 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.248397 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.248407 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:23Z","lastTransitionTime":"2026-02-18T11:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:23 crc kubenswrapper[4717]: E0218 11:50:23.259422 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:23Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:23 crc kubenswrapper[4717]: E0218 11:50:23.259600 4717 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.261079 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.261116 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.261128 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.261144 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.261154 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:23Z","lastTransitionTime":"2026-02-18T11:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.363703 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.363734 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.363744 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.363759 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.363770 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:23Z","lastTransitionTime":"2026-02-18T11:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.465463 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.465498 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.465523 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.465537 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.465546 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:23Z","lastTransitionTime":"2026-02-18T11:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.568543 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.568597 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.568615 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.568635 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.568656 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:23Z","lastTransitionTime":"2026-02-18T11:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.670866 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.670920 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.670937 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.670960 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.670977 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:23Z","lastTransitionTime":"2026-02-18T11:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.773374 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.773412 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.773421 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.773433 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.773443 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:23Z","lastTransitionTime":"2026-02-18T11:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.876762 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.877104 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.877198 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.877320 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.877402 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:23Z","lastTransitionTime":"2026-02-18T11:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.940084 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 00:00:16.783634768 +0000 UTC Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.979871 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.979917 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.979931 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.979949 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:23 crc kubenswrapper[4717]: I0218 11:50:23.979963 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:23Z","lastTransitionTime":"2026-02-18T11:50:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.035467 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.035486 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:24 crc kubenswrapper[4717]: E0218 11:50:24.035644 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.035732 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:24 crc kubenswrapper[4717]: E0218 11:50:24.035733 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:24 crc kubenswrapper[4717]: E0218 11:50:24.035890 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.081822 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.082095 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.082183 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.082312 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.082420 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:24Z","lastTransitionTime":"2026-02-18T11:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.184713 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.184768 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.184777 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.184791 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.184801 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:24Z","lastTransitionTime":"2026-02-18T11:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.288132 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.288204 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.288224 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.288248 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.288318 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:24Z","lastTransitionTime":"2026-02-18T11:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.390878 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.390954 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.390967 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.390981 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.390990 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:24Z","lastTransitionTime":"2026-02-18T11:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.493604 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.493668 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.493682 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.493708 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.493722 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:24Z","lastTransitionTime":"2026-02-18T11:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.596813 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.596857 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.596889 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.596904 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.596913 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:24Z","lastTransitionTime":"2026-02-18T11:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.699691 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.699739 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.699752 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.699770 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.699783 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:24Z","lastTransitionTime":"2026-02-18T11:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.802402 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.802446 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.802458 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.802476 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.802491 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:24Z","lastTransitionTime":"2026-02-18T11:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.905607 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.905645 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.905654 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.905669 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.905678 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:24Z","lastTransitionTime":"2026-02-18T11:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:24 crc kubenswrapper[4717]: I0218 11:50:24.940307 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:29:57.09267003 +0000 UTC Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.007850 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.007883 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.007891 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.007904 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.007912 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:25Z","lastTransitionTime":"2026-02-18T11:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.036494 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:25 crc kubenswrapper[4717]: E0218 11:50:25.036641 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.111559 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.111647 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.111659 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.111679 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.111698 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:25Z","lastTransitionTime":"2026-02-18T11:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.215422 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.215478 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.215488 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.215505 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.215514 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:25Z","lastTransitionTime":"2026-02-18T11:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.319458 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.319555 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.319581 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.319617 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.319641 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:25Z","lastTransitionTime":"2026-02-18T11:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.427015 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.427625 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.427702 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.427794 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.427877 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:25Z","lastTransitionTime":"2026-02-18T11:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.531686 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.531721 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.531730 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.531749 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.531760 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:25Z","lastTransitionTime":"2026-02-18T11:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.635290 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.635381 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.635405 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.635438 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.635457 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:25Z","lastTransitionTime":"2026-02-18T11:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.739042 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.739109 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.739124 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.739149 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.739167 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:25Z","lastTransitionTime":"2026-02-18T11:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.842532 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.842565 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.842573 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.842587 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.842597 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:25Z","lastTransitionTime":"2026-02-18T11:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.941053 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:00:16.355940368 +0000 UTC Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.945951 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.946012 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.946026 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.946048 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:25 crc kubenswrapper[4717]: I0218 11:50:25.946061 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:25Z","lastTransitionTime":"2026-02-18T11:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.035789 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:26 crc kubenswrapper[4717]: E0218 11:50:26.036012 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.036023 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:26 crc kubenswrapper[4717]: E0218 11:50:26.036802 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.036993 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:26 crc kubenswrapper[4717]: E0218 11:50:26.037312 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.048479 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.048514 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.048523 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.048547 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.048556 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:26Z","lastTransitionTime":"2026-02-18T11:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.151166 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.151199 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.151226 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.151239 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.151248 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:26Z","lastTransitionTime":"2026-02-18T11:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.254083 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.254143 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.254152 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.254166 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.254175 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:26Z","lastTransitionTime":"2026-02-18T11:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.357338 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.357392 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.357401 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.357422 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.357444 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:26Z","lastTransitionTime":"2026-02-18T11:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.461002 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.461095 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.461119 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.461152 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.461179 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:26Z","lastTransitionTime":"2026-02-18T11:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.565393 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.565479 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.565496 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.565537 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.565554 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:26Z","lastTransitionTime":"2026-02-18T11:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.668751 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.668815 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.668825 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.668840 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.668850 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:26Z","lastTransitionTime":"2026-02-18T11:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.771843 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.771884 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.771893 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.771907 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.771917 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:26Z","lastTransitionTime":"2026-02-18T11:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.874729 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.874765 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.874773 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.874786 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.874797 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:26Z","lastTransitionTime":"2026-02-18T11:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.941898 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 10:50:44.487180213 +0000 UTC Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.976885 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.976921 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.976932 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.976947 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:26 crc kubenswrapper[4717]: I0218 11:50:26.976959 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:26Z","lastTransitionTime":"2026-02-18T11:50:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.036017 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:27 crc kubenswrapper[4717]: E0218 11:50:27.036303 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.059249 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.088022 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.088075 4717 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.088095 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.088122 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.088142 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:27Z","lastTransitionTime":"2026-02-18T11:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.097223 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7384bb-88b8-4289-bb8b-75f73c4aa836\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d69511a7f432f28b50125317816ad4277c99af230bedea36b51eb065df9550b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4613447ce28f5f823dca2464f6a32d4b53a6edcffdab8b14b6bee24ad063a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4b77ec7db777a423715817b1109136588611133bfce104406dd19d01764f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.121714 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\"
,\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.136105 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.152744 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.166714 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1
ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.188581 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.190121 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.190211 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.190223 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.190252 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.190283 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:27Z","lastTransitionTime":"2026-02-18T11:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.200012 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.210026 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.223140 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.241361 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"message\\\":\\\"\\\\nI0218 11:50:14.800637 6349 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 
11:50:14.800662 6349 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:50:14.800648 6349 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:50:14.800685 6349 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 11:50:14.800699 6349 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 11:50:14.800713 6349 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 11:50:14.800725 6349 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:50:14.800758 6349 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:50:14.803437 6349 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 11:50:14.803578 6349 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:50:14.803608 6349 factory.go:656] Stopping watch factory\\\\nI0218 11:50:14.803627 6349 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:50:14.803676 6349 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 11:50:14.803691 6349 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:50:14.803709 6349 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:50:14.803827 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:50:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa67
23e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.252217 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.262747 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.276398 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.289741 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.292204 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.292394 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.292486 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.292585 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.292669 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:27Z","lastTransitionTime":"2026-02-18T11:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.300053 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.310644 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:27Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.394782 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.395102 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.395123 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.395142 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.395151 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:27Z","lastTransitionTime":"2026-02-18T11:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.498116 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.498164 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.498175 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.498190 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.498202 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:27Z","lastTransitionTime":"2026-02-18T11:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.600972 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.601032 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.601046 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.601065 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.601080 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:27Z","lastTransitionTime":"2026-02-18T11:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.702741 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.702772 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.702780 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.702792 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.702800 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:27Z","lastTransitionTime":"2026-02-18T11:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.804959 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.805002 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.805013 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.805031 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.805042 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:27Z","lastTransitionTime":"2026-02-18T11:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.907770 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.907879 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.907896 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.907913 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.907925 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:27Z","lastTransitionTime":"2026-02-18T11:50:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:27 crc kubenswrapper[4717]: I0218 11:50:27.942295 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 00:07:40.327354696 +0000 UTC Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.010885 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.010929 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.010940 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.010957 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.010971 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:28Z","lastTransitionTime":"2026-02-18T11:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.036151 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:28 crc kubenswrapper[4717]: E0218 11:50:28.036408 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.036242 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:28 crc kubenswrapper[4717]: E0218 11:50:28.036634 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.036242 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:28 crc kubenswrapper[4717]: E0218 11:50:28.037142 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.114797 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.114869 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.114884 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.114907 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.114921 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:28Z","lastTransitionTime":"2026-02-18T11:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.217560 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.217622 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.217636 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.217660 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.217673 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:28Z","lastTransitionTime":"2026-02-18T11:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.321372 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.321467 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.321483 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.321502 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.321514 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:28Z","lastTransitionTime":"2026-02-18T11:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.424831 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.424869 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.424908 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.424926 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.424938 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:28Z","lastTransitionTime":"2026-02-18T11:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.528164 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.528238 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.528251 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.528306 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.528323 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:28Z","lastTransitionTime":"2026-02-18T11:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.630945 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.631009 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.631022 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.631047 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.631063 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:28Z","lastTransitionTime":"2026-02-18T11:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.734521 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.734577 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.734591 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.734615 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.734642 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:28Z","lastTransitionTime":"2026-02-18T11:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.838062 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.838144 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.838158 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.838183 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.838198 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:28Z","lastTransitionTime":"2026-02-18T11:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.942026 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.942107 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.942119 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.942140 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.942155 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:28Z","lastTransitionTime":"2026-02-18T11:50:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:28 crc kubenswrapper[4717]: I0218 11:50:28.942479 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:41:40.164183687 +0000 UTC Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.035889 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:29 crc kubenswrapper[4717]: E0218 11:50:29.036139 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.044413 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.044479 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.044495 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.044520 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.044534 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:29Z","lastTransitionTime":"2026-02-18T11:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.148166 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.148246 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.148303 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.148342 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.148361 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:29Z","lastTransitionTime":"2026-02-18T11:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.251025 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.251093 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.251106 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.251130 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.251146 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:29Z","lastTransitionTime":"2026-02-18T11:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.354379 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.354445 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.354457 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.354478 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.354490 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:29Z","lastTransitionTime":"2026-02-18T11:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.457723 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.458355 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.458695 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.459055 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.459583 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:29Z","lastTransitionTime":"2026-02-18T11:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.562383 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.562420 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.562428 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.562440 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.562449 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:29Z","lastTransitionTime":"2026-02-18T11:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.665183 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.665230 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.665242 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.665299 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.665315 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:29Z","lastTransitionTime":"2026-02-18T11:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.769025 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.769117 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.769132 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.769154 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.769169 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:29Z","lastTransitionTime":"2026-02-18T11:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.872112 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.872184 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.872195 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.872211 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.872220 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:29Z","lastTransitionTime":"2026-02-18T11:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.943647 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:51:06.440980489 +0000 UTC Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.975012 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.975064 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.975081 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.975100 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:29 crc kubenswrapper[4717]: I0218 11:50:29.975110 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:29Z","lastTransitionTime":"2026-02-18T11:50:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.035745 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.035830 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:30 crc kubenswrapper[4717]: E0218 11:50:30.035900 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.035982 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:30 crc kubenswrapper[4717]: E0218 11:50:30.036096 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:30 crc kubenswrapper[4717]: E0218 11:50:30.036250 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.077677 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.077710 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.077721 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.077737 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.077752 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:30Z","lastTransitionTime":"2026-02-18T11:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.181114 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.181790 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.181828 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.181847 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.181856 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:30Z","lastTransitionTime":"2026-02-18T11:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.284240 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.284290 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.284298 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.284314 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.284322 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:30Z","lastTransitionTime":"2026-02-18T11:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.387418 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.387445 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.387457 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.387473 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.387486 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:30Z","lastTransitionTime":"2026-02-18T11:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.489882 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.489912 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.489921 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.489934 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.489943 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:30Z","lastTransitionTime":"2026-02-18T11:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.592316 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.592346 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.592354 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.592367 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.592375 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:30Z","lastTransitionTime":"2026-02-18T11:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.695586 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.696049 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.696214 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.696435 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.696611 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:30Z","lastTransitionTime":"2026-02-18T11:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.802401 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.803078 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.803441 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.803784 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.804139 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:30Z","lastTransitionTime":"2026-02-18T11:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.911587 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.911681 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.911715 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.911742 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.911761 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:30Z","lastTransitionTime":"2026-02-18T11:50:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:30 crc kubenswrapper[4717]: I0218 11:50:30.943800 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 08:39:16.657730375 +0000 UTC Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.014197 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.014250 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.014294 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.014318 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.014335 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:31Z","lastTransitionTime":"2026-02-18T11:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.036556 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:31 crc kubenswrapper[4717]: E0218 11:50:31.036697 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.118166 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.118232 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.118294 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.118330 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.118351 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:31Z","lastTransitionTime":"2026-02-18T11:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.220520 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.220567 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.220579 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.220598 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.220613 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:31Z","lastTransitionTime":"2026-02-18T11:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.322801 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.322833 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.322843 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.322861 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.322872 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:31Z","lastTransitionTime":"2026-02-18T11:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.425043 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.425080 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.425093 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.425112 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.425123 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:31Z","lastTransitionTime":"2026-02-18T11:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.527998 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.528032 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.528042 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.528057 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.528068 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:31Z","lastTransitionTime":"2026-02-18T11:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.629799 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.630098 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.630216 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.630334 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.630407 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:31Z","lastTransitionTime":"2026-02-18T11:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.733966 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.734007 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.734019 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.734034 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.734043 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:31Z","lastTransitionTime":"2026-02-18T11:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.836771 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.837155 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.837248 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.837356 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.837424 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:31Z","lastTransitionTime":"2026-02-18T11:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.938981 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.939008 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.939016 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.939036 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.939048 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:31Z","lastTransitionTime":"2026-02-18T11:50:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:31 crc kubenswrapper[4717]: I0218 11:50:31.944418 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:42:02.689119827 +0000 UTC Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.036148 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:32 crc kubenswrapper[4717]: E0218 11:50:32.036276 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.036389 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:32 crc kubenswrapper[4717]: E0218 11:50:32.036457 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.037056 4717 scope.go:117] "RemoveContainer" containerID="a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.037165 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:32 crc kubenswrapper[4717]: E0218 11:50:32.037282 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" Feb 18 11:50:32 crc kubenswrapper[4717]: E0218 11:50:32.037486 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.041014 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.041047 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.041056 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.041068 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.041077 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:32Z","lastTransitionTime":"2026-02-18T11:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.119725 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs\") pod \"network-metrics-daemon-gxzpl\" (UID: \"a549f413-5b44-4fac-a21e-4f41cc30fbe6\") " pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:32 crc kubenswrapper[4717]: E0218 11:50:32.119916 4717 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:50:32 crc kubenswrapper[4717]: E0218 11:50:32.120005 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs podName:a549f413-5b44-4fac-a21e-4f41cc30fbe6 nodeName:}" failed. No retries permitted until 2026-02-18 11:51:04.119987543 +0000 UTC m=+98.522088859 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs") pod "network-metrics-daemon-gxzpl" (UID: "a549f413-5b44-4fac-a21e-4f41cc30fbe6") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.143128 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.143179 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.143190 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.143207 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.143219 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:32Z","lastTransitionTime":"2026-02-18T11:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.247010 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.247282 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.247359 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.247425 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.247507 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:32Z","lastTransitionTime":"2026-02-18T11:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.350112 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.350163 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.350175 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.350193 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.350203 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:32Z","lastTransitionTime":"2026-02-18T11:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.452421 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.452477 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.452491 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.452519 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.452534 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:32Z","lastTransitionTime":"2026-02-18T11:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.555844 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.556173 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.556411 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.556512 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.556578 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:32Z","lastTransitionTime":"2026-02-18T11:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.659849 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.659905 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.659918 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.659941 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.659955 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:32Z","lastTransitionTime":"2026-02-18T11:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.762773 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.762828 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.762837 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.762855 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.762871 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:32Z","lastTransitionTime":"2026-02-18T11:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.865092 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.865134 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.865144 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.865159 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.865169 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:32Z","lastTransitionTime":"2026-02-18T11:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.945297 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:56:23.672810705 +0000 UTC Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.967132 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.967187 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.967201 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.967232 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:32 crc kubenswrapper[4717]: I0218 11:50:32.967243 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:32Z","lastTransitionTime":"2026-02-18T11:50:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.035905 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:33 crc kubenswrapper[4717]: E0218 11:50:33.036068 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.069721 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.069754 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.069764 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.069781 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.069791 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:33Z","lastTransitionTime":"2026-02-18T11:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.172303 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.172346 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.172359 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.172376 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.172389 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:33Z","lastTransitionTime":"2026-02-18T11:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.275644 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.275699 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.275721 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.275745 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.275763 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:33Z","lastTransitionTime":"2026-02-18T11:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.338999 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.339037 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.339046 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.339060 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.339069 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:33Z","lastTransitionTime":"2026-02-18T11:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:33 crc kubenswrapper[4717]: E0218 11:50:33.354848 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:33Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.358754 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.358791 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.358807 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.358827 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.358840 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:33Z","lastTransitionTime":"2026-02-18T11:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:33 crc kubenswrapper[4717]: E0218 11:50:33.374337 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:33Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.379435 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.379488 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.379505 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.379529 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.379549 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:33Z","lastTransitionTime":"2026-02-18T11:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:33 crc kubenswrapper[4717]: E0218 11:50:33.394599 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:33Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.400402 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.400450 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.400462 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.400485 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.400498 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:33Z","lastTransitionTime":"2026-02-18T11:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:33 crc kubenswrapper[4717]: E0218 11:50:33.413662 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:33Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.417816 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.417857 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.417866 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.417882 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.417895 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:33Z","lastTransitionTime":"2026-02-18T11:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:33 crc kubenswrapper[4717]: E0218 11:50:33.432116 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:33Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:33 crc kubenswrapper[4717]: E0218 11:50:33.432250 4717 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.434641 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.434697 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.434710 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.434729 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.434742 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:33Z","lastTransitionTime":"2026-02-18T11:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.537418 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.537451 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.537459 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.537472 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.537481 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:33Z","lastTransitionTime":"2026-02-18T11:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.639633 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.639694 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.639704 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.639719 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.639728 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:33Z","lastTransitionTime":"2026-02-18T11:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.741484 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.741548 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.741559 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.741573 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.741582 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:33Z","lastTransitionTime":"2026-02-18T11:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.843687 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.843719 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.843727 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.843739 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.843756 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:33Z","lastTransitionTime":"2026-02-18T11:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.945569 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:21:09.978612879 +0000 UTC Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.946043 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.946078 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.946089 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.946107 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:33 crc kubenswrapper[4717]: I0218 11:50:33.946117 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:33Z","lastTransitionTime":"2026-02-18T11:50:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.035909 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:34 crc kubenswrapper[4717]: E0218 11:50:34.036047 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.036150 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.036175 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:34 crc kubenswrapper[4717]: E0218 11:50:34.036304 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:34 crc kubenswrapper[4717]: E0218 11:50:34.036384 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.047926 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.047987 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.048000 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.048014 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.048024 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:34Z","lastTransitionTime":"2026-02-18T11:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.150277 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.150318 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.150329 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.150345 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.150357 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:34Z","lastTransitionTime":"2026-02-18T11:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.252925 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.252961 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.252970 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.252983 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.252994 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:34Z","lastTransitionTime":"2026-02-18T11:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.355729 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.355769 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.355778 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.355794 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.355802 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:34Z","lastTransitionTime":"2026-02-18T11:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.457934 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.457980 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.457995 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.458014 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.458029 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:34Z","lastTransitionTime":"2026-02-18T11:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.560436 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.560471 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.560479 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.560490 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.560500 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:34Z","lastTransitionTime":"2026-02-18T11:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.663595 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.663633 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.663648 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.663673 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.663689 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:34Z","lastTransitionTime":"2026-02-18T11:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.765859 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.765901 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.765912 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.765926 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.765938 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:34Z","lastTransitionTime":"2026-02-18T11:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.867799 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.867838 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.867851 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.867867 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.867878 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:34Z","lastTransitionTime":"2026-02-18T11:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.945922 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:09:26.71638324 +0000 UTC Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.969687 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.969718 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.969726 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.969738 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:34 crc kubenswrapper[4717]: I0218 11:50:34.969747 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:34Z","lastTransitionTime":"2026-02-18T11:50:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.035913 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:35 crc kubenswrapper[4717]: E0218 11:50:35.036107 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.071891 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.071927 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.071938 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.071952 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.071964 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:35Z","lastTransitionTime":"2026-02-18T11:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.174521 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.174565 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.174575 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.174589 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.174600 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:35Z","lastTransitionTime":"2026-02-18T11:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.277212 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.277297 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.277317 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.277342 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.277360 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:35Z","lastTransitionTime":"2026-02-18T11:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.379778 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.379826 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.379841 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.379860 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.379877 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:35Z","lastTransitionTime":"2026-02-18T11:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.482220 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.482300 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.482309 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.482322 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.482332 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:35Z","lastTransitionTime":"2026-02-18T11:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.505763 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hvktx_41f72a5f-4820-4dc2-a6c5-243550881aaf/kube-multus/0.log" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.505803 4717 generic.go:334] "Generic (PLEG): container finished" podID="41f72a5f-4820-4dc2-a6c5-243550881aaf" containerID="74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228" exitCode=1 Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.505830 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hvktx" event={"ID":"41f72a5f-4820-4dc2-a6c5-243550881aaf","Type":"ContainerDied","Data":"74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228"} Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.506226 4717 scope.go:117] "RemoveContainer" containerID="74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.521748 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\"
,\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.533809 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.546803 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.557246 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1
ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.569411 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.579295 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7384bb-88b8-4289-bb8b-75f73c4aa836\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d69511a7f432f28b50125317816ad4277c99af230bedea36b51eb065df9550b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4613447ce28f5f823dca2464f6a32d4b53a6edcffdab8b14b6bee24ad063a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4b77ec7db777a423715817b1109136588611133bfce104406dd19d01764f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.584938 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.584968 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.584978 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.584991 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.585000 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:35Z","lastTransitionTime":"2026-02-18T11:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.589357 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.601018 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.613593 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.625616 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.637209 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.649777 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66383
6549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.666870 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"message\\\":\\\"\\\\nI0218 11:50:14.800637 6349 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:50:14.800662 6349 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:50:14.800648 6349 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:50:14.800685 6349 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0218 11:50:14.800699 6349 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 11:50:14.800713 6349 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 11:50:14.800725 6349 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:50:14.800758 6349 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:50:14.803437 6349 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 11:50:14.803578 6349 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:50:14.803608 6349 factory.go:656] Stopping watch factory\\\\nI0218 11:50:14.803627 6349 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:50:14.803676 6349 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 11:50:14.803691 6349 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:50:14.803709 6349 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:50:14.803827 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:50:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa67
23e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.677480 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.686945 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.686973 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.686982 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.686995 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.687004 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:35Z","lastTransitionTime":"2026-02-18T11:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.687874 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.697791 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.708931 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:34Z\\\",\\\"message\\\":\\\"2026-02-18T11:49:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661\\\\n2026-02-18T11:49:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661 to /host/opt/cni/bin/\\\\n2026-02-18T11:49:49Z [verbose] multus-daemon started\\\\n2026-02-18T11:49:49Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:50:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:35Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.788960 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.789249 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.789339 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.789401 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.789458 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:35Z","lastTransitionTime":"2026-02-18T11:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.891721 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.891849 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.891924 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.891995 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.892057 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:35Z","lastTransitionTime":"2026-02-18T11:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.946021 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 10:58:20.025203462 +0000 UTC Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.994754 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.994791 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.994804 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.994818 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:35 crc kubenswrapper[4717]: I0218 11:50:35.994828 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:35Z","lastTransitionTime":"2026-02-18T11:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.036305 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.036674 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.036554 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:36 crc kubenswrapper[4717]: E0218 11:50:36.036886 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:36 crc kubenswrapper[4717]: E0218 11:50:36.037115 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:36 crc kubenswrapper[4717]: E0218 11:50:36.037274 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.097686 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.098008 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.098074 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.098137 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.098203 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:36Z","lastTransitionTime":"2026-02-18T11:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.200440 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.200483 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.200494 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.200513 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.200523 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:36Z","lastTransitionTime":"2026-02-18T11:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.303342 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.303380 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.303390 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.303404 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.303414 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:36Z","lastTransitionTime":"2026-02-18T11:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.405658 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.405695 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.405704 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.405718 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.405727 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:36Z","lastTransitionTime":"2026-02-18T11:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.507542 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.507575 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.507584 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.507597 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.507606 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:36Z","lastTransitionTime":"2026-02-18T11:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.510154 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hvktx_41f72a5f-4820-4dc2-a6c5-243550881aaf/kube-multus/0.log" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.510206 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hvktx" event={"ID":"41f72a5f-4820-4dc2-a6c5-243550881aaf","Type":"ContainerStarted","Data":"bdc0e2a60a848f63de21d7e284f91b49cd6d637ce86ec545ed5cd2a8fb411579"} Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.523030 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.534905 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.545946 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.557940 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.576237 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66383
6549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.593986 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"message\\\":\\\"\\\\nI0218 11:50:14.800637 6349 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:50:14.800662 6349 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:50:14.800648 6349 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:50:14.800685 6349 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0218 11:50:14.800699 6349 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 11:50:14.800713 6349 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 11:50:14.800725 6349 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:50:14.800758 6349 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:50:14.803437 6349 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 11:50:14.803578 6349 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:50:14.803608 6349 factory.go:656] Stopping watch factory\\\\nI0218 11:50:14.803627 6349 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:50:14.803676 6349 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 11:50:14.803691 6349 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:50:14.803709 6349 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:50:14.803827 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:50:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa67
23e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.608758 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.610009 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.610070 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.610094 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.610119 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.610137 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:36Z","lastTransitionTime":"2026-02-18T11:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.619849 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.633013 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc 
kubenswrapper[4717]: I0218 11:50:36.647863 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc0e2a60a848f63de21d7e284f91b49cd6d637ce86ec545ed5cd2a8fb411579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:34Z\\\",\\\"message\\\":\\\"2026-02-18T11:49:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661\\\\n2026-02-18T11:49:49+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661 to /host/opt/cni/bin/\\\\n2026-02-18T11:49:49Z [verbose] multus-daemon started\\\\n2026-02-18T11:49:49Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:50:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.659475 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.671234 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19
b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.683165 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.693550 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.704073 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1
ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.714659 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.714698 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.714708 4717 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.714725 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.714737 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:36Z","lastTransitionTime":"2026-02-18T11:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.717664 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.728525 4717 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7384bb-88b8-4289-bb8b-75f73c4aa836\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d69511a7f432f28b50125317816ad4277c99af230bedea36b51eb065df9550b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4613447ce28f5f823dca2464f6a32d4b53a6edcffdab8b14b6bee24ad063a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4b77ec7db777a423715817b1109136588611133bfce104406dd19d01764f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:36Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.816816 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.816859 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.816869 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.816883 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.816891 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:36Z","lastTransitionTime":"2026-02-18T11:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.919027 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.919058 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.919067 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.919080 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.919090 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:36Z","lastTransitionTime":"2026-02-18T11:50:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:36 crc kubenswrapper[4717]: I0218 11:50:36.946639 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:59:28.773196325 +0000 UTC Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.021737 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.021780 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.021793 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.021809 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.021824 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:37Z","lastTransitionTime":"2026-02-18T11:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.036974 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:37 crc kubenswrapper[4717]: E0218 11:50:37.037130 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.048162 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc 
kubenswrapper[4717]: I0218 11:50:37.049058 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.061811 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc0e2a60a848f63de21d7e284f91b49cd6d637ce86ec545ed5cd2a8fb411579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:34Z\\\",\\\"message\\\":\\\"2026-02-18T11:49:49+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661\\\\n2026-02-18T11:49:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661 to /host/opt/cni/bin/\\\\n2026-02-18T11:49:49Z [verbose] multus-daemon started\\\\n2026-02-18T11:49:49Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:50:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.073839 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.083427 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.094764 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.105647 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.116751 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1
ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.124460 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.124506 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.124517 4717 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.124533 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.124545 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:37Z","lastTransitionTime":"2026-02-18T11:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.128501 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.139788 4717 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7384bb-88b8-4289-bb8b-75f73c4aa836\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d69511a7f432f28b50125317816ad4277c99af230bedea36b51eb065df9550b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4613447ce28f5f823dca2464f6a32d4b53a6edcffdab8b14b6bee24ad063a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4b77ec7db777a423715817b1109136588611133bfce104406dd19d01764f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.154687 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\"
,\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.168503 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.178091 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.189291 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6
cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.204652 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b2
6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952
f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:4
9:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.222332 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"message\\\":\\\"\\\\nI0218 11:50:14.800637 6349 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 
11:50:14.800662 6349 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:50:14.800648 6349 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:50:14.800685 6349 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 11:50:14.800699 6349 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 11:50:14.800713 6349 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 11:50:14.800725 6349 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:50:14.800758 6349 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:50:14.803437 6349 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 11:50:14.803578 6349 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:50:14.803608 6349 factory.go:656] Stopping watch factory\\\\nI0218 11:50:14.803627 6349 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:50:14.803676 6349 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 11:50:14.803691 6349 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:50:14.803709 6349 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:50:14.803827 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:50:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa67
23e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.226150 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.226363 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.226445 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.226567 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.226632 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:37Z","lastTransitionTime":"2026-02-18T11:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.235841 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.247061 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:37Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.329064 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.329118 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.329128 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.329144 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.329155 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:37Z","lastTransitionTime":"2026-02-18T11:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.431339 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.431396 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.431406 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.431426 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.431439 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:37Z","lastTransitionTime":"2026-02-18T11:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.534064 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.534388 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.534466 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.534568 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.534647 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:37Z","lastTransitionTime":"2026-02-18T11:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.637073 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.637112 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.637122 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.637138 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.637149 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:37Z","lastTransitionTime":"2026-02-18T11:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.739774 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.739822 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.739834 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.739849 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.739861 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:37Z","lastTransitionTime":"2026-02-18T11:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.842762 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.842799 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.842810 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.842845 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.842855 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:37Z","lastTransitionTime":"2026-02-18T11:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.945003 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.945044 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.945053 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.945067 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.945079 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:37Z","lastTransitionTime":"2026-02-18T11:50:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:37 crc kubenswrapper[4717]: I0218 11:50:37.947154 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 08:51:53.364203452 +0000 UTC Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.035448 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:38 crc kubenswrapper[4717]: E0218 11:50:38.036119 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.035522 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:38 crc kubenswrapper[4717]: E0218 11:50:38.036406 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.035477 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:38 crc kubenswrapper[4717]: E0218 11:50:38.036649 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.047539 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.047660 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.047752 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.048038 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.048238 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:38Z","lastTransitionTime":"2026-02-18T11:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.150800 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.150847 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.150859 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.150879 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.150891 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:38Z","lastTransitionTime":"2026-02-18T11:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.253652 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.253963 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.254089 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.254190 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.254292 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:38Z","lastTransitionTime":"2026-02-18T11:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.356587 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.356640 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.356651 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.356670 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.356681 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:38Z","lastTransitionTime":"2026-02-18T11:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.459280 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.459609 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.459716 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.459923 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.460387 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:38Z","lastTransitionTime":"2026-02-18T11:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.563633 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.563888 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.563980 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.564090 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.564157 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:38Z","lastTransitionTime":"2026-02-18T11:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.666951 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.667244 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.667359 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.667444 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.667533 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:38Z","lastTransitionTime":"2026-02-18T11:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.770625 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.770681 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.770695 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.770713 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.770725 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:38Z","lastTransitionTime":"2026-02-18T11:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.873243 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.873328 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.873344 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.873364 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.873375 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:38Z","lastTransitionTime":"2026-02-18T11:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.947344 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:24:36.077880679 +0000 UTC Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.975364 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.975650 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.975716 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.975775 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:38 crc kubenswrapper[4717]: I0218 11:50:38.975833 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:38Z","lastTransitionTime":"2026-02-18T11:50:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.036504 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:39 crc kubenswrapper[4717]: E0218 11:50:39.036638 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.079088 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.079141 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.079153 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.079174 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.079185 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:39Z","lastTransitionTime":"2026-02-18T11:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.181988 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.182338 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.182419 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.182526 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.182612 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:39Z","lastTransitionTime":"2026-02-18T11:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.285539 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.285609 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.285622 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.285644 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.285656 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:39Z","lastTransitionTime":"2026-02-18T11:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.388047 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.388320 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.388433 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.388518 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.388590 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:39Z","lastTransitionTime":"2026-02-18T11:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.492666 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.492723 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.492738 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.492756 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.492769 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:39Z","lastTransitionTime":"2026-02-18T11:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.594792 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.594835 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.594845 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.594864 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.594882 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:39Z","lastTransitionTime":"2026-02-18T11:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.696663 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.696698 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.696708 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.696724 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.696736 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:39Z","lastTransitionTime":"2026-02-18T11:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.799559 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.799617 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.799637 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.799654 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.799669 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:39Z","lastTransitionTime":"2026-02-18T11:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.902420 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.902471 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.902483 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.902501 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.902512 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:39Z","lastTransitionTime":"2026-02-18T11:50:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:39 crc kubenswrapper[4717]: I0218 11:50:39.948272 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:50:54.804661883 +0000 UTC Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.004477 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.004522 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.004531 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.004544 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.004553 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:40Z","lastTransitionTime":"2026-02-18T11:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.035777 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.035832 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.035798 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:40 crc kubenswrapper[4717]: E0218 11:50:40.035901 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:40 crc kubenswrapper[4717]: E0218 11:50:40.035983 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:40 crc kubenswrapper[4717]: E0218 11:50:40.036057 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.106565 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.106603 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.106614 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.106633 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.106645 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:40Z","lastTransitionTime":"2026-02-18T11:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.209191 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.209242 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.209274 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.209296 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.209307 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:40Z","lastTransitionTime":"2026-02-18T11:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.312089 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.312127 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.312137 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.312152 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.312163 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:40Z","lastTransitionTime":"2026-02-18T11:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.415147 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.415208 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.415221 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.415238 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.415249 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:40Z","lastTransitionTime":"2026-02-18T11:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.518044 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.518069 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.518078 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.518089 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.518098 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:40Z","lastTransitionTime":"2026-02-18T11:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.620017 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.620053 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.620081 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.620094 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.620102 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:40Z","lastTransitionTime":"2026-02-18T11:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.722052 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.722106 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.722123 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.722144 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.722160 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:40Z","lastTransitionTime":"2026-02-18T11:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.824464 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.824504 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.824512 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.824542 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.824551 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:40Z","lastTransitionTime":"2026-02-18T11:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.926814 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.926897 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.926930 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.926949 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.926960 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:40Z","lastTransitionTime":"2026-02-18T11:50:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:40 crc kubenswrapper[4717]: I0218 11:50:40.949196 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:03:55.908548699 +0000 UTC Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.029288 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.029355 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.029378 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.029406 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.029427 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:41Z","lastTransitionTime":"2026-02-18T11:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.035700 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:41 crc kubenswrapper[4717]: E0218 11:50:41.035860 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.131664 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.131709 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.131718 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.131733 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.131742 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:41Z","lastTransitionTime":"2026-02-18T11:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.234386 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.234440 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.234452 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.234470 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.234485 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:41Z","lastTransitionTime":"2026-02-18T11:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.337031 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.337080 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.337090 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.337104 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.337114 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:41Z","lastTransitionTime":"2026-02-18T11:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.440430 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.440476 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.440488 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.440503 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.440515 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:41Z","lastTransitionTime":"2026-02-18T11:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.543454 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.543495 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.543506 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.543524 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.543534 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:41Z","lastTransitionTime":"2026-02-18T11:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.646073 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.646364 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.646587 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.646677 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.646762 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:41Z","lastTransitionTime":"2026-02-18T11:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.751413 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.751464 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.751832 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.751944 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.752024 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:41Z","lastTransitionTime":"2026-02-18T11:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.854387 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.854427 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.854437 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.854451 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.854461 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:41Z","lastTransitionTime":"2026-02-18T11:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.949795 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 01:45:10.429863451 +0000 UTC Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.956763 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.956805 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.956817 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.956831 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:41 crc kubenswrapper[4717]: I0218 11:50:41.956840 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:41Z","lastTransitionTime":"2026-02-18T11:50:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.035710 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:42 crc kubenswrapper[4717]: E0218 11:50:42.036064 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.036184 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:42 crc kubenswrapper[4717]: E0218 11:50:42.036320 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.036437 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:42 crc kubenswrapper[4717]: E0218 11:50:42.036545 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.059140 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.059177 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.059187 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.059202 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.059212 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:42Z","lastTransitionTime":"2026-02-18T11:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.161853 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.161919 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.161937 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.161960 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.161973 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:42Z","lastTransitionTime":"2026-02-18T11:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.264601 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.264883 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.265054 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.265192 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.265299 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:42Z","lastTransitionTime":"2026-02-18T11:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.368225 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.368519 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.368632 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.368725 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.368821 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:42Z","lastTransitionTime":"2026-02-18T11:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.471409 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.471704 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.471801 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.471898 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.472034 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:42Z","lastTransitionTime":"2026-02-18T11:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.574832 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.574866 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.574875 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.574887 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.574896 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:42Z","lastTransitionTime":"2026-02-18T11:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.677794 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.677850 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.677861 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.677875 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.677884 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:42Z","lastTransitionTime":"2026-02-18T11:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.781003 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.781092 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.781109 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.781124 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.781152 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:42Z","lastTransitionTime":"2026-02-18T11:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.884054 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.884091 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.884100 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.884115 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.884125 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:42Z","lastTransitionTime":"2026-02-18T11:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.950823 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:59:47.61987951 +0000 UTC Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.986725 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.986786 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.986798 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.986815 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:42 crc kubenswrapper[4717]: I0218 11:50:42.986829 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:42Z","lastTransitionTime":"2026-02-18T11:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.036242 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:43 crc kubenswrapper[4717]: E0218 11:50:43.036379 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.036932 4717 scope.go:117] "RemoveContainer" containerID="a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.089335 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.089404 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.089418 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.089459 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.089478 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:43Z","lastTransitionTime":"2026-02-18T11:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.191560 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.191589 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.191599 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.191613 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.191624 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:43Z","lastTransitionTime":"2026-02-18T11:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.293605 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.293642 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.293654 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.293669 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.293690 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:43Z","lastTransitionTime":"2026-02-18T11:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.395344 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.395380 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.395391 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.395407 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.395418 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:43Z","lastTransitionTime":"2026-02-18T11:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.485352 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.485404 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.485414 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.485430 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.485442 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:43Z","lastTransitionTime":"2026-02-18T11:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:43 crc kubenswrapper[4717]: E0218 11:50:43.496741 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.500571 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.500608 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.500617 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.500630 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.500639 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:43Z","lastTransitionTime":"2026-02-18T11:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:43 crc kubenswrapper[4717]: E0218 11:50:43.512090 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.515160 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.515200 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.515212 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.515227 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.515239 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:43Z","lastTransitionTime":"2026-02-18T11:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:43 crc kubenswrapper[4717]: E0218 11:50:43.525866 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.532827 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.532891 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.532901 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.532915 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.532925 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:43Z","lastTransitionTime":"2026-02-18T11:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.534209 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovnkube-controller/2.log" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.536637 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerStarted","Data":"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24"} Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.537044 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:50:43 crc kubenswrapper[4717]: E0218 11:50:43.545376 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.548086 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.548114 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.548141 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.548155 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.548166 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:43Z","lastTransitionTime":"2026-02-18T11:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.557560 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: E0218 11:50:43.569500 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: E0218 11:50:43.569615 4717 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.571187 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.571238 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.571248 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.571283 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.571295 4717 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:43Z","lastTransitionTime":"2026-02-18T11:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.576926 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.586622 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1
ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.597872 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.608518 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7384bb-88b8-4289-bb8b-75f73c4aa836\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d69511a7f432f28b50125317816ad4277c99af230bedea36b51eb065df9550b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4613447ce28f5f823dca2464f6a32d4b53a6edcffdab8b14b6bee24ad063a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4b77ec7db777a423715817b1109136588611133bfce104406dd19d01764f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.619562 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"327579cb-11be-41d3-b3f0-d06e22ace6d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a14a2b95d0d214a2ddca91de7f1d1738e1b775b9c1d2372d683d77a91eb4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b38a1e2a7f6ac4dd14690013c73f68d4c9e2c7143f9edad2dbc9907d4669c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b38a1e2a7f6ac4dd14690013c73f68d4c9e2c7143f9edad2dbc9907d4669c98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.636867 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\"
,\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.651175 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.662873 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.673658 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.673703 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.673715 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.673730 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.673743 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:43Z","lastTransitionTime":"2026-02-18T11:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.675785 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.690979 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66383
6549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.708109 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"message\\\":\\\"\\\\nI0218 11:50:14.800637 6349 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:50:14.800662 6349 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:50:14.800648 6349 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:50:14.800685 6349 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0218 11:50:14.800699 6349 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 11:50:14.800713 6349 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 11:50:14.800725 6349 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:50:14.800758 6349 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:50:14.803437 6349 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 11:50:14.803578 6349 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:50:14.803608 6349 factory.go:656] Stopping watch factory\\\\nI0218 11:50:14.803627 6349 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:50:14.803676 6349 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 11:50:14.803691 6349 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:50:14.803709 6349 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:50:14.803827 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:50:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.719611 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.732303 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.741978 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc 
kubenswrapper[4717]: I0218 11:50:43.753695 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc0e2a60a848f63de21d7e284f91b49cd6d637ce86ec545ed5cd2a8fb411579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:34Z\\\",\\\"message\\\":\\\"2026-02-18T11:49:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661\\\\n2026-02-18T11:49:49+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661 to /host/opt/cni/bin/\\\\n2026-02-18T11:49:49Z [verbose] multus-daemon started\\\\n2026-02-18T11:49:49Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:50:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.764657 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.774334 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.776160 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.776188 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.776199 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.776216 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.776226 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:43Z","lastTransitionTime":"2026-02-18T11:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.878927 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.878971 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.878982 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.878997 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.879011 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:43Z","lastTransitionTime":"2026-02-18T11:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.951450 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 16:16:32.276855795 +0000 UTC Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.981516 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.981739 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.982014 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.982188 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:43 crc kubenswrapper[4717]: I0218 11:50:43.982255 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:43Z","lastTransitionTime":"2026-02-18T11:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.035899 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.036154 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.036199 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:44 crc kubenswrapper[4717]: E0218 11:50:44.036508 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:44 crc kubenswrapper[4717]: E0218 11:50:44.036874 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:44 crc kubenswrapper[4717]: E0218 11:50:44.036869 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.084895 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.084936 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.084948 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.084967 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.084979 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:44Z","lastTransitionTime":"2026-02-18T11:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.187742 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.187796 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.187806 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.187820 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.187830 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:44Z","lastTransitionTime":"2026-02-18T11:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.289987 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.290014 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.290024 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.290036 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.290045 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:44Z","lastTransitionTime":"2026-02-18T11:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.392170 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.392236 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.392248 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.392283 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.392295 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:44Z","lastTransitionTime":"2026-02-18T11:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.494211 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.494247 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.494284 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.494310 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.494322 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:44Z","lastTransitionTime":"2026-02-18T11:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.541715 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovnkube-controller/3.log" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.542634 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovnkube-controller/2.log" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.553327 4717 generic.go:334] "Generic (PLEG): container finished" podID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerID="09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24" exitCode=1 Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.553385 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerDied","Data":"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24"} Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.553429 4717 scope.go:117] "RemoveContainer" containerID="a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.553973 4717 scope.go:117] "RemoveContainer" containerID="09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24" Feb 18 11:50:44 crc kubenswrapper[4717]: E0218 11:50:44.554237 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.568466 4717 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.581192 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.597342 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 
11:50:44.597385 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.597394 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.597408 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.597420 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:44Z","lastTransitionTime":"2026-02-18T11:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.598171 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66383
6549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.615445 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a68a2857afca2633dc31a102aff40a2779deaa38faa12285fbbbb6df9cac624b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"message\\\":\\\"\\\\nI0218 11:50:14.800637 6349 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 11:50:14.800662 6349 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 11:50:14.800648 6349 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 11:50:14.800685 6349 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI0218 11:50:14.800699 6349 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 11:50:14.800713 6349 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 11:50:14.800725 6349 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 11:50:14.800758 6349 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 11:50:14.803437 6349 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 11:50:14.803578 6349 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 11:50:14.803608 6349 factory.go:656] Stopping watch factory\\\\nI0218 11:50:14.803627 6349 ovnkube.go:599] Stopped ovnkube\\\\nI0218 11:50:14.803676 6349 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 11:50:14.803691 6349 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 11:50:14.803709 6349 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 11:50:14.803827 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:50:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), 
Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0218 11:50:43.866333 6743 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z]\\\\nI0218 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-b
in-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\
"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc 
kubenswrapper[4717]: I0218 11:50:44.627219 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.638603 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.647916 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.659569 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc0e2a60a848f63de21d7e284f91b49cd6d637ce86ec545ed5cd2a8fb411579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"20
26-02-18T11:50:34Z\\\",\\\"message\\\":\\\"2026-02-18T11:49:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661\\\\n2026-02-18T11:49:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661 to /host/opt/cni/bin/\\\\n2026-02-18T11:49:49Z [verbose] multus-daemon started\\\\n2026-02-18T11:49:49Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:50:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"h
ostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.669315 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"327579cb-11be-41d3-b3f0-d06e22ace6d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a14a2b95d0d214a2ddca91de7f1d1738e1b775b9c1d2372d683d77a91eb4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b38a1e2a7f6ac4dd14690013c73f68d4c9e2c7143f9edad2dbc9907d4669c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b38a1e2a7f6ac4dd14690013c73f68d4c9e2c7143f9edad2dbc9907d4669c98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.686314 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\"
,\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.699998 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.700537 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.700674 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.700750 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 
11:50:44.700828 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.700909 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:44Z","lastTransitionTime":"2026-02-18T11:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.712603 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.722315 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1
ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.733760 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.746075 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7384bb-88b8-4289-bb8b-75f73c4aa836\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d69511a7f432f28b50125317816ad4277c99af230bedea36b51eb065df9550b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4613447ce28f5f823dca2464f6a32d4b53a6edcffdab8b14b6bee24ad063a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4b77ec7db777a423715817b1109136588611133bfce104406dd19d01764f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.756621 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d
571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.771723 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.782679 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:44Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.802801 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.802838 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.802849 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.802862 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.802871 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:44Z","lastTransitionTime":"2026-02-18T11:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.905171 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.905433 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.905514 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.905594 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.905729 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:44Z","lastTransitionTime":"2026-02-18T11:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:44 crc kubenswrapper[4717]: I0218 11:50:44.952347 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 11:32:08.878214476 +0000 UTC Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.008755 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.008799 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.008813 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.008829 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.008839 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:45Z","lastTransitionTime":"2026-02-18T11:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.036242 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:45 crc kubenswrapper[4717]: E0218 11:50:45.036384 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.111816 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.111880 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.111889 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.111902 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.111911 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:45Z","lastTransitionTime":"2026-02-18T11:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.214498 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.214547 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.214564 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.214578 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.214615 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:45Z","lastTransitionTime":"2026-02-18T11:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.317634 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.317678 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.317687 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.317701 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.317714 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:45Z","lastTransitionTime":"2026-02-18T11:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.420337 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.420401 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.420411 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.420425 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.420434 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:45Z","lastTransitionTime":"2026-02-18T11:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.523094 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.523132 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.523151 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.523166 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.523215 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:45Z","lastTransitionTime":"2026-02-18T11:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.557777 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovnkube-controller/3.log" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.561884 4717 scope.go:117] "RemoveContainer" containerID="09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24" Feb 18 11:50:45 crc kubenswrapper[4717]: E0218 11:50:45.562044 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.578392 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.592718 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.605777 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6
cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.620161 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b2
6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952
f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:4
9:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.626283 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.626323 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.626336 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.626355 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.626366 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:45Z","lastTransitionTime":"2026-02-18T11:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.640485 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0218 11:50:43.866333 6743 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z]\\\\nI0218 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:50:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa67
23e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.658483 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.674199 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.690117 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc0e2a60a848f63de21d7e284f91b49cd6d637ce86ec545ed5cd2a8fb411579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:34Z\\\",\\\"message\\\":\\\"2026-02-18T11:49:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661\\\\n2026-02-18T11:49:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661 to /host/opt/cni/bin/\\\\n2026-02-18T11:49:49Z [verbose] multus-daemon started\\\\n2026-02-18T11:49:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T11:50:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.709189 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512
335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.721498 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.728684 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.728709 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.728717 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.728730 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.728738 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:45Z","lastTransitionTime":"2026-02-18T11:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.735213 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc 
kubenswrapper[4717]: I0218 11:50:45.747945 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.761544 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7384bb-88b8-4289-bb8b-75f73c4aa836\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d69511a7f432f28b50125317816ad4277c99af230bedea36b51eb065df9550b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4613447ce28f5f823dca2464f6a32d4b53a6edcffdab8b14b6bee24ad063a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4b77ec7db777a423715817b1109136588611133bfce104406dd19d01764f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.771049 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"327579cb-11be-41d3-b3f0-d06e22ace6d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a14a2b95d0d214a2ddca91de7f1d1738e1b775b9c1d2372d683d77a91eb4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b38a1e2a7f6ac4dd14690013c73f68d4c9e2c7143f9edad2dbc9907d4669c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b38a1e2a7f6ac4dd14690013c73f68d4c9e2c7143f9edad2dbc9907d4669c98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.783513 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\"
,\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.797327 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.810590 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.822668 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1
ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:45Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.831426 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.831596 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.831658 4717 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.831720 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.831795 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:45Z","lastTransitionTime":"2026-02-18T11:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.934704 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.934762 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.934775 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.934793 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.934805 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:45Z","lastTransitionTime":"2026-02-18T11:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:45 crc kubenswrapper[4717]: I0218 11:50:45.953129 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 19:05:25.513761629 +0000 UTC Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.036322 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:46 crc kubenswrapper[4717]: E0218 11:50:46.036470 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.036543 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.036892 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:46 crc kubenswrapper[4717]: E0218 11:50:46.037569 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:46 crc kubenswrapper[4717]: E0218 11:50:46.037731 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.038746 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.038818 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.038853 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.038883 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.038905 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:46Z","lastTransitionTime":"2026-02-18T11:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.049806 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.141609 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.141677 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.141702 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.141731 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.141753 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:46Z","lastTransitionTime":"2026-02-18T11:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.244032 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.244077 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.244089 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.244104 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.244114 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:46Z","lastTransitionTime":"2026-02-18T11:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.348426 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.348481 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.348498 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.348519 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.348534 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:46Z","lastTransitionTime":"2026-02-18T11:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.451481 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.451545 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.451562 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.451585 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.451602 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:46Z","lastTransitionTime":"2026-02-18T11:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.554355 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.554416 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.554435 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.554461 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.554482 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:46Z","lastTransitionTime":"2026-02-18T11:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.657205 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.657563 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.657662 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.657786 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.657870 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:46Z","lastTransitionTime":"2026-02-18T11:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.760681 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.760722 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.760736 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.760762 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.760787 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:46Z","lastTransitionTime":"2026-02-18T11:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.863249 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.863359 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.863374 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.863392 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.863404 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:46Z","lastTransitionTime":"2026-02-18T11:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.953978 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:23:32.708252274 +0000 UTC Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.966182 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.966230 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.966243 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.966276 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:46 crc kubenswrapper[4717]: I0218 11:50:46.966292 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:46Z","lastTransitionTime":"2026-02-18T11:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.036383 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:47 crc kubenswrapper[4717]: E0218 11:50:47.036512 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.049493 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.061483 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.068655 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.068690 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.068699 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:47 crc 
kubenswrapper[4717]: I0218 11:50:47.068712 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.068723 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:47Z","lastTransitionTime":"2026-02-18T11:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.073045 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18
T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.084022 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.094534 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7384bb-88b8-4289-bb8b-75f73c4aa836\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d69511a7f432f28b50125317816ad4277c99af230bedea36b51eb065df9550b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4613447ce28f5f823dca2464f6a32d4b53a6edcffdab8b14b6bee24ad063a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4b77ec7db777a423715817b1109136588611133bfce104406dd19d01764f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.102532 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"327579cb-11be-41d3-b3f0-d06e22ace6d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a14a2b95d0d214a2ddca91de7f1d1738e1b775b9c1d2372d683d77a91eb4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b38a1e2a7f6ac4dd14690013c73f68d4c9e2c7143f9edad2dbc9907d4669c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b38a1e2a7f6ac4dd14690013c73f68d4c9e2c7143f9edad2dbc9907d4669c98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.113667 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\"
,\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.128561 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.138201 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.149966 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6
cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.164196 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b2
6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952
f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:4
9:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.170756 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.170789 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.170797 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.170810 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.170819 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:47Z","lastTransitionTime":"2026-02-18T11:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.182157 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0218 11:50:43.866333 6743 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z]\\\\nI0218 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:50:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa67
23e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.194019 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.205594 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.216476 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc 
kubenswrapper[4717]: I0218 11:50:47.233742 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9cbde91-c162-4c7b-a1d5-b941ae4c5b8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7518840ee42cef58d644b46ae1e1b80697b56f388a3527d3b121e9182e52603c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://e9cbf93f12390f9b3ccc4f8de94d2f42260dd9acd5ca40c8bd9ca970e51d9d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3caff189b204e7e933545efd5a5acbb2fd97003a683bb46e570d8cf4b2ba11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd037ce86d41c95ec34b977239e3d0293bb808c3c58a76c9e857182f5690e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa91e3cfc80d3351055790e7820fb376618b3b0f33e23e56bc56bca79031cbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://598bdc87459f43b34243a0e9d6ecd2e60de53b8b142a6ffa93096b514736f5ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598bdc87459f43b34243a0e9d6ecd2e60de53b8b142a6ffa93096b514736f5ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0cf6ea4909e64b0bded040d2949a469b5c2e17d01a5087c5e66f4301f353ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f0cf6ea4909e64b0bded040d2949a469b5c2e17d01a5087c5e66f4301f353ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9fa02f6578b7c706c5107415e36c2ba21826967190742b732b387c8853b10550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa02f6578b7c706c5107415e36c2ba21826967190742b732b387c8853b10550\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.247158 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc0e2a60a848f63de21d7e
284f91b49cd6d637ce86ec545ed5cd2a8fb411579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:34Z\\\",\\\"message\\\":\\\"2026-02-18T11:49:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661\\\\n2026-02-18T11:49:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661 to /host/opt/cni/bin/\\\\n2026-02-18T11:49:49Z [verbose] multus-daemon started\\\\n2026-02-18T11:49:49Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:50:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.257531 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.266074 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:47Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.272638 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.272798 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.272821 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.272837 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.272868 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:47Z","lastTransitionTime":"2026-02-18T11:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.375326 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.375382 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.375435 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.375457 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.375471 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:47Z","lastTransitionTime":"2026-02-18T11:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.476801 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.476836 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.476847 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.476862 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.476873 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:47Z","lastTransitionTime":"2026-02-18T11:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.579074 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.579395 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.579404 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.579419 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.579427 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:47Z","lastTransitionTime":"2026-02-18T11:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.681723 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.681988 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.682056 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.682119 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.682177 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:47Z","lastTransitionTime":"2026-02-18T11:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.784839 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.784885 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.784895 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.784909 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.784918 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:47Z","lastTransitionTime":"2026-02-18T11:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.887194 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.887237 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.887326 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.887345 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.887408 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:47Z","lastTransitionTime":"2026-02-18T11:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.954322 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 16:40:56.291425043 +0000 UTC Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.989403 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.989450 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.989462 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.989478 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:47 crc kubenswrapper[4717]: I0218 11:50:47.989490 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:47Z","lastTransitionTime":"2026-02-18T11:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.036139 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.036228 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:48 crc kubenswrapper[4717]: E0218 11:50:48.036468 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.036179 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:48 crc kubenswrapper[4717]: E0218 11:50:48.036641 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:48 crc kubenswrapper[4717]: E0218 11:50:48.036968 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.091127 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.091162 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.091172 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.091186 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.091196 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:48Z","lastTransitionTime":"2026-02-18T11:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.193702 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.193745 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.193761 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.193781 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.193796 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:48Z","lastTransitionTime":"2026-02-18T11:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.296855 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.296924 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.296934 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.296950 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.296960 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:48Z","lastTransitionTime":"2026-02-18T11:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.398740 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.398783 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.398795 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.398810 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.398820 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:48Z","lastTransitionTime":"2026-02-18T11:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.501785 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.501847 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.501859 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.501874 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.501887 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:48Z","lastTransitionTime":"2026-02-18T11:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.604345 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.604378 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.604388 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.604405 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.604417 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:48Z","lastTransitionTime":"2026-02-18T11:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.707231 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.707315 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.707327 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.707344 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.707354 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:48Z","lastTransitionTime":"2026-02-18T11:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.809764 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.809799 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.809808 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.809821 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.809830 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:48Z","lastTransitionTime":"2026-02-18T11:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.911616 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.911699 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.911713 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.911729 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.911740 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:48Z","lastTransitionTime":"2026-02-18T11:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:48 crc kubenswrapper[4717]: I0218 11:50:48.956156 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:05:49.627872623 +0000 UTC Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.014168 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.014236 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.014251 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.014442 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.014483 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:49Z","lastTransitionTime":"2026-02-18T11:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.035607 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:49 crc kubenswrapper[4717]: E0218 11:50:49.035817 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.116711 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.116768 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.116785 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.116805 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.116821 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:49Z","lastTransitionTime":"2026-02-18T11:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.219646 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.219672 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.219681 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.219693 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.219702 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:49Z","lastTransitionTime":"2026-02-18T11:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.322169 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.322213 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.322225 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.322240 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.322252 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:49Z","lastTransitionTime":"2026-02-18T11:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.424482 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.424517 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.424525 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.424537 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.424546 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:49Z","lastTransitionTime":"2026-02-18T11:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.526433 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.526477 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.526491 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.526507 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.526518 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:49Z","lastTransitionTime":"2026-02-18T11:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.629151 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.629189 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.629200 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.629215 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.629224 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:49Z","lastTransitionTime":"2026-02-18T11:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.731990 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.732034 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.732044 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.732063 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.732074 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:49Z","lastTransitionTime":"2026-02-18T11:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.834501 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.834552 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.834562 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.834578 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.834589 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:49Z","lastTransitionTime":"2026-02-18T11:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.937334 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.937381 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.937390 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.937404 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.937416 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:49Z","lastTransitionTime":"2026-02-18T11:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:49 crc kubenswrapper[4717]: I0218 11:50:49.956706 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:58:03.363060327 +0000 UTC Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.036431 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.036487 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.036528 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:50 crc kubenswrapper[4717]: E0218 11:50:50.036577 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:50 crc kubenswrapper[4717]: E0218 11:50:50.036636 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:50 crc kubenswrapper[4717]: E0218 11:50:50.036732 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.040238 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.040299 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.040316 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.040335 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.040345 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:50Z","lastTransitionTime":"2026-02-18T11:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.142885 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.142939 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.142949 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.142979 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.142988 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:50Z","lastTransitionTime":"2026-02-18T11:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.246728 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.246767 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.246777 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.246792 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.246805 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:50Z","lastTransitionTime":"2026-02-18T11:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.349795 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.349843 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.349854 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.349870 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.349883 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:50Z","lastTransitionTime":"2026-02-18T11:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.451941 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.451980 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.451990 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.452005 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.452017 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:50Z","lastTransitionTime":"2026-02-18T11:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.554306 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.554876 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.555207 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.555519 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.555598 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:50Z","lastTransitionTime":"2026-02-18T11:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.658380 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.658412 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.658420 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.658432 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.658440 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:50Z","lastTransitionTime":"2026-02-18T11:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.709249 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:50:50 crc kubenswrapper[4717]: E0218 11:50:50.709459 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 11:51:54.709436428 +0000 UTC m=+149.111537744 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.760004 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.760048 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.760062 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.760076 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.760086 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:50Z","lastTransitionTime":"2026-02-18T11:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.810455 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.810502 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.810545 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.810579 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:50 crc kubenswrapper[4717]: E0218 11:50:50.810579 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 18 11:50:50 crc kubenswrapper[4717]: E0218 11:50:50.810604 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:50:50 crc kubenswrapper[4717]: E0218 11:50:50.810616 4717 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:50:50 crc kubenswrapper[4717]: E0218 11:50:50.810645 4717 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:50:50 crc kubenswrapper[4717]: E0218 11:50:50.810689 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.810676242 +0000 UTC m=+149.212777558 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:50:50 crc kubenswrapper[4717]: E0218 11:50:50.810704 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-18 11:51:54.810698463 +0000 UTC m=+149.212799779 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 11:50:50 crc kubenswrapper[4717]: E0218 11:50:50.810724 4717 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:50:50 crc kubenswrapper[4717]: E0218 11:50:50.810758 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 11:50:50 crc kubenswrapper[4717]: E0218 11:50:50.810801 4717 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 11:50:50 crc kubenswrapper[4717]: E0218 11:50:50.810814 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.810795235 +0000 UTC m=+149.212896551 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 11:50:50 crc kubenswrapper[4717]: E0218 11:50:50.810816 4717 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:50:50 crc kubenswrapper[4717]: E0218 11:50:50.810877 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.810858377 +0000 UTC m=+149.212959783 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.862204 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.862243 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.862288 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.862308 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.862321 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:50Z","lastTransitionTime":"2026-02-18T11:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.957026 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 06:24:30.931371283 +0000 UTC Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.964715 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.964750 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.964761 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.964776 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:50 crc kubenswrapper[4717]: I0218 11:50:50.964787 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:50Z","lastTransitionTime":"2026-02-18T11:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.036218 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:51 crc kubenswrapper[4717]: E0218 11:50:51.036369 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.067150 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.067179 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.067187 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.067198 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.067206 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:51Z","lastTransitionTime":"2026-02-18T11:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.169759 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.169790 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.169800 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.169812 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.169821 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:51Z","lastTransitionTime":"2026-02-18T11:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.272219 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.272246 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.272277 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.272289 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.272298 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:51Z","lastTransitionTime":"2026-02-18T11:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.374491 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.374542 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.374554 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.374570 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.374582 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:51Z","lastTransitionTime":"2026-02-18T11:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.477409 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.477443 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.477451 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.477464 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.477472 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:51Z","lastTransitionTime":"2026-02-18T11:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.584501 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.584559 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.584571 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.584587 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.584598 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:51Z","lastTransitionTime":"2026-02-18T11:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.686619 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.686994 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.687129 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.687307 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.687424 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:51Z","lastTransitionTime":"2026-02-18T11:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.790230 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.790290 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.790303 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.790318 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.790328 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:51Z","lastTransitionTime":"2026-02-18T11:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.892940 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.892974 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.892985 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.893007 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.893018 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:51Z","lastTransitionTime":"2026-02-18T11:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.957379 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:05:32.047694888 +0000 UTC Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.995182 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.995213 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.995220 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.995232 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:51 crc kubenswrapper[4717]: I0218 11:50:51.995242 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:51Z","lastTransitionTime":"2026-02-18T11:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.035738 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.035786 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.035786 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:52 crc kubenswrapper[4717]: E0218 11:50:52.035905 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:52 crc kubenswrapper[4717]: E0218 11:50:52.035988 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:52 crc kubenswrapper[4717]: E0218 11:50:52.036071 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.098059 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.098094 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.098105 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.098121 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.098133 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:52Z","lastTransitionTime":"2026-02-18T11:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.201051 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.201094 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.201128 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.201145 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.201156 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:52Z","lastTransitionTime":"2026-02-18T11:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.303903 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.303944 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.303953 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.303969 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.303980 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:52Z","lastTransitionTime":"2026-02-18T11:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.405692 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.405756 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.405766 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.405779 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.405792 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:52Z","lastTransitionTime":"2026-02-18T11:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.508382 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.508419 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.508431 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.508445 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.508455 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:52Z","lastTransitionTime":"2026-02-18T11:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.610593 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.610646 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.610656 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.610670 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.610678 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:52Z","lastTransitionTime":"2026-02-18T11:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.713314 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.713351 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.713359 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.713374 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.713383 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:52Z","lastTransitionTime":"2026-02-18T11:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.815153 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.815189 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.815198 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.815210 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.815217 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:52Z","lastTransitionTime":"2026-02-18T11:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.917231 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.917289 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.917303 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.917318 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.917330 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:52Z","lastTransitionTime":"2026-02-18T11:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:52 crc kubenswrapper[4717]: I0218 11:50:52.958021 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:22:03.675911027 +0000 UTC Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.020436 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.020482 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.020494 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.020510 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.020521 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:53Z","lastTransitionTime":"2026-02-18T11:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.035933 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:53 crc kubenswrapper[4717]: E0218 11:50:53.036096 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.122856 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.122896 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.122914 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.122930 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.122941 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:53Z","lastTransitionTime":"2026-02-18T11:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.224939 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.225010 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.225033 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.225063 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.225085 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:53Z","lastTransitionTime":"2026-02-18T11:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.328086 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.328586 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.328608 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.328639 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.328661 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:53Z","lastTransitionTime":"2026-02-18T11:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.431641 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.431695 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.431710 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.431725 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.431735 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:53Z","lastTransitionTime":"2026-02-18T11:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.534386 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.534467 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.534477 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.534492 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.534503 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:53Z","lastTransitionTime":"2026-02-18T11:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.637200 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.637291 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.637304 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.637320 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.637332 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:53Z","lastTransitionTime":"2026-02-18T11:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.739709 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.739754 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.739763 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.739777 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.739787 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:53Z","lastTransitionTime":"2026-02-18T11:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.775398 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.775442 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.775464 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.775480 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.775491 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:53Z","lastTransitionTime":"2026-02-18T11:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:53 crc kubenswrapper[4717]: E0218 11:50:53.786533 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.790080 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.790134 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.790148 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.790165 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.790177 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:53Z","lastTransitionTime":"2026-02-18T11:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:53 crc kubenswrapper[4717]: E0218 11:50:53.801523 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.804891 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.804934 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.804946 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.804963 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.804974 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:53Z","lastTransitionTime":"2026-02-18T11:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:53 crc kubenswrapper[4717]: E0218 11:50:53.816539 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.839961 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.840036 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.840052 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.840071 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.840473 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:53Z","lastTransitionTime":"2026-02-18T11:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:53 crc kubenswrapper[4717]: E0218 11:50:53.851967 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:53Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:53 crc kubenswrapper[4717]: E0218 11:50:53.852083 4717 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.853879 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.853920 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.853930 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.853943 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.853952 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:53Z","lastTransitionTime":"2026-02-18T11:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.955957 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.956003 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.956017 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.956034 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.956046 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:53Z","lastTransitionTime":"2026-02-18T11:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:53 crc kubenswrapper[4717]: I0218 11:50:53.959170 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 14:26:15.372453665 +0000 UTC Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.035767 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.035815 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.035787 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:54 crc kubenswrapper[4717]: E0218 11:50:54.035921 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:54 crc kubenswrapper[4717]: E0218 11:50:54.036012 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:54 crc kubenswrapper[4717]: E0218 11:50:54.036106 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.058686 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.058727 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.058739 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.058753 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.058761 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:54Z","lastTransitionTime":"2026-02-18T11:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.160515 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.160565 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.160581 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.160603 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.160619 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:54Z","lastTransitionTime":"2026-02-18T11:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.264241 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.264302 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.264313 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.264330 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.264342 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:54Z","lastTransitionTime":"2026-02-18T11:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.366880 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.366920 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.366929 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.366942 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.366951 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:54Z","lastTransitionTime":"2026-02-18T11:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.469957 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.470027 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.470045 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.470067 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.470080 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:54Z","lastTransitionTime":"2026-02-18T11:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.572360 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.572408 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.572420 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.572437 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.572452 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:54Z","lastTransitionTime":"2026-02-18T11:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.674364 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.674577 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.674607 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.674627 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.674642 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:54Z","lastTransitionTime":"2026-02-18T11:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.777056 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.777118 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.777128 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.777147 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.777160 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:54Z","lastTransitionTime":"2026-02-18T11:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.879149 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.879190 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.879200 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.879214 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.879225 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:54Z","lastTransitionTime":"2026-02-18T11:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.959966 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 15:04:01.519545416 +0000 UTC Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.982323 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.982361 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.982371 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.982384 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:54 crc kubenswrapper[4717]: I0218 11:50:54.982394 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:54Z","lastTransitionTime":"2026-02-18T11:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.036168 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:55 crc kubenswrapper[4717]: E0218 11:50:55.036775 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.085459 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.085520 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.085533 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.085554 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.085567 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:55Z","lastTransitionTime":"2026-02-18T11:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.188422 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.188458 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.188471 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.188488 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.188500 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:55Z","lastTransitionTime":"2026-02-18T11:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.292083 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.292119 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.292131 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.292144 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.292153 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:55Z","lastTransitionTime":"2026-02-18T11:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.395678 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.395726 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.395735 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.395752 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.395761 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:55Z","lastTransitionTime":"2026-02-18T11:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.498158 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.498211 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.498233 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.498279 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.498302 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:55Z","lastTransitionTime":"2026-02-18T11:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.601054 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.601126 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.601139 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.601164 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.601179 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:55Z","lastTransitionTime":"2026-02-18T11:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.703747 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.703828 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.703838 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.703854 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.703863 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:55Z","lastTransitionTime":"2026-02-18T11:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.806767 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.806814 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.806833 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.806850 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.806859 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:55Z","lastTransitionTime":"2026-02-18T11:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.910054 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.911206 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.911453 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.911532 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.911634 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:55Z","lastTransitionTime":"2026-02-18T11:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:55 crc kubenswrapper[4717]: I0218 11:50:55.960542 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:29:19.06448772 +0000 UTC Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.013865 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.014147 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.014215 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.014329 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.014410 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:56Z","lastTransitionTime":"2026-02-18T11:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.036157 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.036323 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.036367 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:56 crc kubenswrapper[4717]: E0218 11:50:56.036544 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:56 crc kubenswrapper[4717]: E0218 11:50:56.036715 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:56 crc kubenswrapper[4717]: E0218 11:50:56.036909 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.117456 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.117497 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.117505 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.117518 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.117528 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:56Z","lastTransitionTime":"2026-02-18T11:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.219681 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.219721 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.219730 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.219745 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.219755 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:56Z","lastTransitionTime":"2026-02-18T11:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.321839 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.321873 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.321882 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.321894 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.321913 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:56Z","lastTransitionTime":"2026-02-18T11:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.425231 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.425291 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.425303 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.425318 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.425331 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:56Z","lastTransitionTime":"2026-02-18T11:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.528100 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.528138 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.528147 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.528162 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.528172 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:56Z","lastTransitionTime":"2026-02-18T11:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.630624 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.630683 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.630699 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.630723 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.630737 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:56Z","lastTransitionTime":"2026-02-18T11:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.734064 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.734118 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.734130 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.734149 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.734161 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:56Z","lastTransitionTime":"2026-02-18T11:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.836961 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.837004 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.837012 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.837025 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.837034 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:56Z","lastTransitionTime":"2026-02-18T11:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.939590 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.939631 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.939644 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.939661 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.939885 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:56Z","lastTransitionTime":"2026-02-18T11:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:56 crc kubenswrapper[4717]: I0218 11:50:56.961470 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:47:01.618120278 +0000 UTC Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.035669 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:57 crc kubenswrapper[4717]: E0218 11:50:57.035820 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.041409 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.041447 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.041456 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.041471 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.041482 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:57Z","lastTransitionTime":"2026-02-18T11:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.046994 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.057354 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.068400 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc 
kubenswrapper[4717]: I0218 11:50:57.085346 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9cbde91-c162-4c7b-a1d5-b941ae4c5b8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7518840ee42cef58d644b46ae1e1b80697b56f388a3527d3b121e9182e52603c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://e9cbf93f12390f9b3ccc4f8de94d2f42260dd9acd5ca40c8bd9ca970e51d9d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3caff189b204e7e933545efd5a5acbb2fd97003a683bb46e570d8cf4b2ba11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd037ce86d41c95ec34b977239e3d0293bb808c3c58a76c9e857182f5690e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa91e3cfc80d3351055790e7820fb376618b3b0f33e23e56bc56bca79031cbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://598bdc87459f43b34243a0e9d6ecd2e60de53b8b142a6ffa93096b514736f5ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598bdc87459f43b34243a0e9d6ecd2e60de53b8b142a6ffa93096b514736f5ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0cf6ea4909e64b0bded040d2949a469b5c2e17d01a5087c5e66f4301f353ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f0cf6ea4909e64b0bded040d2949a469b5c2e17d01a5087c5e66f4301f353ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9fa02f6578b7c706c5107415e36c2ba21826967190742b732b387c8853b10550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa02f6578b7c706c5107415e36c2ba21826967190742b732b387c8853b10550\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.097042 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc0e2a60a848f63de21d7e
284f91b49cd6d637ce86ec545ed5cd2a8fb411579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:34Z\\\",\\\"message\\\":\\\"2026-02-18T11:49:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661\\\\n2026-02-18T11:49:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661 to /host/opt/cni/bin/\\\\n2026-02-18T11:49:49Z [verbose] multus-daemon started\\\\n2026-02-18T11:49:49Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:50:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.106719 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"327579cb-11be-41d3-b3f0-d06e22ace6d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a14a2b95d0d214a2ddca91de7f1d1738e1b775b9c1d2372d683d77a91eb4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242
b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b38a1e2a7f6ac4dd14690013c73f68d4c9e2c7143f9edad2dbc9907d4669c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b38a1e2a7f6ac4dd14690013c73f68d4c9e2c7143f9edad2dbc9907d4669c98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 
2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.121491 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8
e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.136466 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.143390 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.143437 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.143454 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.143473 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.143485 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:57Z","lastTransitionTime":"2026-02-18T11:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.151913 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.163059 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1
ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.174568 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.186946 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7384bb-88b8-4289-bb8b-75f73c4aa836\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d69511a7f432f28b50125317816ad4277c99af230bedea36b51eb065df9550b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4613447ce28f5f823dca2464f6a32d4b53a6edcffdab8b14b6bee24ad063a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4b77ec7db777a423715817b1109136588611133bfce104406dd19d01764f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.197195 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d
571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.207833 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.217555 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.228958 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.241379 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.245441 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.245498 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.245509 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.245524 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.245554 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:57Z","lastTransitionTime":"2026-02-18T11:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.256046 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.274052 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0218 11:50:43.866333 6743 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z]\\\\nI0218 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:50:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa67
23e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:57Z is after 2025-08-24T17:21:41Z" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.347535 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.347581 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.347590 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.347607 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.347685 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:57Z","lastTransitionTime":"2026-02-18T11:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.450797 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.450864 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.450882 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.450906 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.450918 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:57Z","lastTransitionTime":"2026-02-18T11:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.556319 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.556355 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.556370 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.556385 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.556394 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:57Z","lastTransitionTime":"2026-02-18T11:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.659252 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.660207 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.660335 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.660419 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.660489 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:57Z","lastTransitionTime":"2026-02-18T11:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.763035 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.763071 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.763081 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.763098 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.763108 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:57Z","lastTransitionTime":"2026-02-18T11:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.865875 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.865905 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.865914 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.865926 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.865935 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:57Z","lastTransitionTime":"2026-02-18T11:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.961952 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 14:04:21.870708371 +0000 UTC Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.968796 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.968829 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.968839 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.968851 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:57 crc kubenswrapper[4717]: I0218 11:50:57.968859 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:57Z","lastTransitionTime":"2026-02-18T11:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.035983 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:50:58 crc kubenswrapper[4717]: E0218 11:50:58.036337 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.036003 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:50:58 crc kubenswrapper[4717]: E0218 11:50:58.036481 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.035984 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:50:58 crc kubenswrapper[4717]: E0218 11:50:58.036543 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.071733 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.072010 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.072115 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.072199 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.072311 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:58Z","lastTransitionTime":"2026-02-18T11:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.174508 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.174551 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.174564 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.174579 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.174589 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:58Z","lastTransitionTime":"2026-02-18T11:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.277396 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.277463 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.277509 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.277536 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.277550 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:58Z","lastTransitionTime":"2026-02-18T11:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.379890 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.379929 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.379937 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.379951 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.379960 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:58Z","lastTransitionTime":"2026-02-18T11:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.482172 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.482219 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.482234 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.482251 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.482279 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:58Z","lastTransitionTime":"2026-02-18T11:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.585018 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.585080 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.585107 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.585129 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.585169 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:58Z","lastTransitionTime":"2026-02-18T11:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.688379 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.688420 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.688433 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.688447 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.688459 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:58Z","lastTransitionTime":"2026-02-18T11:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.790429 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.790481 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.790496 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.790513 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.790528 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:58Z","lastTransitionTime":"2026-02-18T11:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.893317 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.893363 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.893380 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.893400 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.893415 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:58Z","lastTransitionTime":"2026-02-18T11:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.962431 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:13:57.721360879 +0000 UTC Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.995279 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.995315 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.995323 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.995337 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:58 crc kubenswrapper[4717]: I0218 11:50:58.995346 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:58Z","lastTransitionTime":"2026-02-18T11:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.036197 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:50:59 crc kubenswrapper[4717]: E0218 11:50:59.036381 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.098273 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.098319 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.098330 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.098347 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.098365 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:59Z","lastTransitionTime":"2026-02-18T11:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.200646 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.200689 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.200701 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.200719 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.200730 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:59Z","lastTransitionTime":"2026-02-18T11:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.303041 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.303113 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.303127 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.303144 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.303156 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:59Z","lastTransitionTime":"2026-02-18T11:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.405311 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.405349 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.405359 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.405374 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.405385 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:59Z","lastTransitionTime":"2026-02-18T11:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.507658 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.507955 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.508023 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.508102 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.508172 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:59Z","lastTransitionTime":"2026-02-18T11:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.610705 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.611088 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.611212 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.611353 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.611479 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:59Z","lastTransitionTime":"2026-02-18T11:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.714129 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.714169 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.714180 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.714197 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.714209 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:59Z","lastTransitionTime":"2026-02-18T11:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.843345 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.843695 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.843831 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.843946 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.844046 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:59Z","lastTransitionTime":"2026-02-18T11:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.946291 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.946334 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.946347 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.946366 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.946378 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:50:59Z","lastTransitionTime":"2026-02-18T11:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:50:59 crc kubenswrapper[4717]: I0218 11:50:59.963136 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:51:44.897370541 +0000 UTC Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.036353 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:00 crc kubenswrapper[4717]: E0218 11:51:00.036979 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.036636 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.036598 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:00 crc kubenswrapper[4717]: E0218 11:51:00.037661 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:00 crc kubenswrapper[4717]: E0218 11:51:00.037838 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.038041 4717 scope.go:117] "RemoveContainer" containerID="09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24" Feb 18 11:51:00 crc kubenswrapper[4717]: E0218 11:51:00.043548 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.049633 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.049708 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.049730 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.049751 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.049765 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:00Z","lastTransitionTime":"2026-02-18T11:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.152639 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.153098 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.153284 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.153392 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.153489 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:00Z","lastTransitionTime":"2026-02-18T11:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.256530 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.256575 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.256583 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.256598 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.256606 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:00Z","lastTransitionTime":"2026-02-18T11:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.358404 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.358448 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.358458 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.358472 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.358481 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:00Z","lastTransitionTime":"2026-02-18T11:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.461087 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.462847 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.462910 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.463001 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.463028 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:00Z","lastTransitionTime":"2026-02-18T11:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.566180 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.566236 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.566251 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.566281 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.566291 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:00Z","lastTransitionTime":"2026-02-18T11:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.669504 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.669591 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.669719 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.670300 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.670377 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:00Z","lastTransitionTime":"2026-02-18T11:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.773020 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.773051 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.773059 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.773072 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.773080 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:00Z","lastTransitionTime":"2026-02-18T11:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.876931 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.876980 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.876992 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.877011 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.877022 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:00Z","lastTransitionTime":"2026-02-18T11:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.963896 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:37:03.886084002 +0000 UTC Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.980037 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.980247 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.980374 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.980473 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:00 crc kubenswrapper[4717]: I0218 11:51:00.980549 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:00Z","lastTransitionTime":"2026-02-18T11:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.036380 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:01 crc kubenswrapper[4717]: E0218 11:51:01.037156 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.084388 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.084457 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.084480 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.084507 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.084529 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:01Z","lastTransitionTime":"2026-02-18T11:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.187580 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.187626 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.187640 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.187656 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.187669 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:01Z","lastTransitionTime":"2026-02-18T11:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.290398 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.290445 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.290457 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.290477 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.290491 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:01Z","lastTransitionTime":"2026-02-18T11:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.392944 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.392997 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.393006 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.393050 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.393063 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:01Z","lastTransitionTime":"2026-02-18T11:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.495551 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.495589 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.495602 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.495617 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.495628 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:01Z","lastTransitionTime":"2026-02-18T11:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.598878 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.598959 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.598980 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.599006 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.599025 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:01Z","lastTransitionTime":"2026-02-18T11:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.701438 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.701512 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.701524 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.701538 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.701548 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:01Z","lastTransitionTime":"2026-02-18T11:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.803507 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.803547 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.803558 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.803572 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.803582 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:01Z","lastTransitionTime":"2026-02-18T11:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.906077 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.906130 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.906139 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.906152 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.906182 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:01Z","lastTransitionTime":"2026-02-18T11:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:01 crc kubenswrapper[4717]: I0218 11:51:01.965249 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 17:11:25.907307314 +0000 UTC Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.008058 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.008249 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.008285 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.008299 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.008308 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:02Z","lastTransitionTime":"2026-02-18T11:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.035688 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.035755 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.035828 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:02 crc kubenswrapper[4717]: E0218 11:51:02.035832 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:02 crc kubenswrapper[4717]: E0218 11:51:02.035879 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:02 crc kubenswrapper[4717]: E0218 11:51:02.035929 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.111503 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.111553 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.111561 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.111575 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.111582 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:02Z","lastTransitionTime":"2026-02-18T11:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.213412 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.213444 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.213453 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.213466 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.213474 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:02Z","lastTransitionTime":"2026-02-18T11:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.316059 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.316089 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.316098 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.316110 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.316119 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:02Z","lastTransitionTime":"2026-02-18T11:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.418432 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.418486 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.418496 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.418511 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.418521 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:02Z","lastTransitionTime":"2026-02-18T11:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.527415 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.527892 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.528069 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.528210 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.528321 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:02Z","lastTransitionTime":"2026-02-18T11:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.631913 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.631966 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.631976 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.631994 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.632006 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:02Z","lastTransitionTime":"2026-02-18T11:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.734703 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.734745 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.734756 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.734772 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.734784 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:02Z","lastTransitionTime":"2026-02-18T11:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.837421 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.837462 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.837471 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.837486 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.837498 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:02Z","lastTransitionTime":"2026-02-18T11:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.940673 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.940721 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.940730 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.940746 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.940760 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:02Z","lastTransitionTime":"2026-02-18T11:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:02 crc kubenswrapper[4717]: I0218 11:51:02.965707 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:33:35.861978412 +0000 UTC Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.036874 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:03 crc kubenswrapper[4717]: E0218 11:51:03.037412 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.043368 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.043427 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.043442 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.043462 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.043481 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:03Z","lastTransitionTime":"2026-02-18T11:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.147306 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.147357 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.147373 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.147400 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.147423 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:03Z","lastTransitionTime":"2026-02-18T11:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.250331 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.250395 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.250413 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.250437 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.250454 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:03Z","lastTransitionTime":"2026-02-18T11:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.353146 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.353441 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.353462 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.353487 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.353511 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:03Z","lastTransitionTime":"2026-02-18T11:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.456386 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.456863 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.456951 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.457059 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.457146 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:03Z","lastTransitionTime":"2026-02-18T11:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.560448 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.560502 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.560517 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.560543 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.560555 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:03Z","lastTransitionTime":"2026-02-18T11:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.664037 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.664549 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.664651 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.664733 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.664791 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:03Z","lastTransitionTime":"2026-02-18T11:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.768085 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.768133 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.768146 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.768168 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.768181 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:03Z","lastTransitionTime":"2026-02-18T11:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.870739 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.870792 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.870802 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.870820 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.870833 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:03Z","lastTransitionTime":"2026-02-18T11:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.966955 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:12:05.650562258 +0000 UTC Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.973163 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.973210 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.973221 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.973239 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:03 crc kubenswrapper[4717]: I0218 11:51:03.973250 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:03Z","lastTransitionTime":"2026-02-18T11:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.036523 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.036580 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.036664 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:04 crc kubenswrapper[4717]: E0218 11:51:04.036672 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:04 crc kubenswrapper[4717]: E0218 11:51:04.036981 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:04 crc kubenswrapper[4717]: E0218 11:51:04.037085 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.076082 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.076124 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.076135 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.076151 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.076165 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:04Z","lastTransitionTime":"2026-02-18T11:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.079995 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.080021 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.080033 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.080046 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.080057 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:04Z","lastTransitionTime":"2026-02-18T11:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:04 crc kubenswrapper[4717]: E0218 11:51:04.095156 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.100279 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.100324 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.100337 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.100354 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.100364 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:04Z","lastTransitionTime":"2026-02-18T11:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:04 crc kubenswrapper[4717]: E0218 11:51:04.112206 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.117046 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.117068 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.117076 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.117088 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.117097 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:04Z","lastTransitionTime":"2026-02-18T11:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:04 crc kubenswrapper[4717]: E0218 11:51:04.135875 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.140528 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.140570 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.140579 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.140594 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.140606 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:04Z","lastTransitionTime":"2026-02-18T11:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.150012 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs\") pod \"network-metrics-daemon-gxzpl\" (UID: \"a549f413-5b44-4fac-a21e-4f41cc30fbe6\") " pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:04 crc kubenswrapper[4717]: E0218 11:51:04.150240 4717 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:51:04 crc kubenswrapper[4717]: E0218 11:51:04.150412 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs podName:a549f413-5b44-4fac-a21e-4f41cc30fbe6 nodeName:}" failed. No retries permitted until 2026-02-18 11:52:08.150370115 +0000 UTC m=+162.552471501 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs") pod "network-metrics-daemon-gxzpl" (UID: "a549f413-5b44-4fac-a21e-4f41cc30fbe6") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 11:51:04 crc kubenswrapper[4717]: E0218 11:51:04.151984 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-c
b864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.157678 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.157715 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.157726 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.157740 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.157752 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:04Z","lastTransitionTime":"2026-02-18T11:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:04 crc kubenswrapper[4717]: E0218 11:51:04.171057 4717 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T11:51:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1a04f158-1706-4e05-bc60-cb864ebb382f\\\",\\\"systemUUID\\\":\\\"956ecb2c-bb9d-4b7e-b56f-b439ce483321\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:04Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:04 crc kubenswrapper[4717]: E0218 11:51:04.171216 4717 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.178711 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.178741 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.178752 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.178769 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.178781 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:04Z","lastTransitionTime":"2026-02-18T11:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.281081 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.281130 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.281141 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.281155 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.281166 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:04Z","lastTransitionTime":"2026-02-18T11:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.383986 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.384023 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.384032 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.384049 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.384059 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:04Z","lastTransitionTime":"2026-02-18T11:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.486901 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.486936 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.486946 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.486957 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.486965 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:04Z","lastTransitionTime":"2026-02-18T11:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.588989 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.589040 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.589055 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.589072 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.589085 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:04Z","lastTransitionTime":"2026-02-18T11:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.691317 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.691360 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.691371 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.691387 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.691396 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:04Z","lastTransitionTime":"2026-02-18T11:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.793292 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.793329 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.793345 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.793361 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.793371 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:04Z","lastTransitionTime":"2026-02-18T11:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.895803 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.895865 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.895884 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.895903 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.895915 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:04Z","lastTransitionTime":"2026-02-18T11:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.967654 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:10:18.445514417 +0000 UTC Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.998536 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.998587 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.998604 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.998624 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:04 crc kubenswrapper[4717]: I0218 11:51:04.998639 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:04Z","lastTransitionTime":"2026-02-18T11:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.035903 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:05 crc kubenswrapper[4717]: E0218 11:51:05.036495 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.101347 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.101384 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.101395 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.101411 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.101422 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:05Z","lastTransitionTime":"2026-02-18T11:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.203695 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.203732 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.203748 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.203766 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.203780 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:05Z","lastTransitionTime":"2026-02-18T11:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.305971 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.306003 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.306010 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.306023 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.306031 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:05Z","lastTransitionTime":"2026-02-18T11:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.408648 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.408713 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.408728 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.408748 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.408760 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:05Z","lastTransitionTime":"2026-02-18T11:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.511781 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.511823 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.511835 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.511852 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.511867 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:05Z","lastTransitionTime":"2026-02-18T11:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.615120 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.615162 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.615174 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.615192 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.615204 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:05Z","lastTransitionTime":"2026-02-18T11:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.717734 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.717772 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.717784 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.717801 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.717813 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:05Z","lastTransitionTime":"2026-02-18T11:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.820413 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.820746 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.820823 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.820894 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.820960 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:05Z","lastTransitionTime":"2026-02-18T11:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.923446 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.924197 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.924325 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.924436 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.924574 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:05Z","lastTransitionTime":"2026-02-18T11:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:05 crc kubenswrapper[4717]: I0218 11:51:05.967766 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:51:29.040063925 +0000 UTC Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.027314 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.027597 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.027700 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.027776 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.027843 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:06Z","lastTransitionTime":"2026-02-18T11:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.035624 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.035668 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.035802 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:06 crc kubenswrapper[4717]: E0218 11:51:06.035897 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:06 crc kubenswrapper[4717]: E0218 11:51:06.036033 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:06 crc kubenswrapper[4717]: E0218 11:51:06.036129 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.130297 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.130332 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.130342 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.130356 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.130366 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:06Z","lastTransitionTime":"2026-02-18T11:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.233523 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.233865 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.233948 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.234030 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.234118 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:06Z","lastTransitionTime":"2026-02-18T11:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.336960 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.337004 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.337014 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.337030 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.337041 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:06Z","lastTransitionTime":"2026-02-18T11:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.439496 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.439537 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.439547 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.439559 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.439568 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:06Z","lastTransitionTime":"2026-02-18T11:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.541825 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.541865 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.541886 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.541908 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.541919 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:06Z","lastTransitionTime":"2026-02-18T11:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.644905 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.644957 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.644970 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.644990 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.645004 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:06Z","lastTransitionTime":"2026-02-18T11:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.747984 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.748048 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.748063 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.748080 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.748094 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:06Z","lastTransitionTime":"2026-02-18T11:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.851877 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.851921 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.851930 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.851946 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.851957 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:06Z","lastTransitionTime":"2026-02-18T11:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.954406 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.954456 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.954465 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.954482 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.954495 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:06Z","lastTransitionTime":"2026-02-18T11:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:06 crc kubenswrapper[4717]: I0218 11:51:06.968779 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:03:39.57944121 +0000 UTC Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.036518 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:07 crc kubenswrapper[4717]: E0218 11:51:07.036829 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.052065 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.056937 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.056991 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.057003 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.057021 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.057032 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:07Z","lastTransitionTime":"2026-02-18T11:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.066063 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.078999 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a92f4826-4ec6-4676-977c-fdf3552b9ea5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a02de57c186ed359ff9308d823324380d73cbef1f6a92b3003bc9473c0435958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3243c34e60a30eb3b454953dadad7f2a8c3a1
ecca55178aa20c9cf20289dc2c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nqztm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.093539 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd47c4cd-87e5-4925-9c7f-79d0b4f5f18f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d7b9d54beef2a3f49f07526add1481878ab16980441239b20b6b02e4cb1bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://420e1fc665e73bef90cbe2e7cf5f59d841652281d7c6f75d243316193d3dde55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efa130bec069d4bfb2cd9a6c23eada4d6a8e4976b888795f537242f3f3d49bd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.107945 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7384bb-88b8-4289-bb8b-75f73c4aa836\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d69511a7f432f28b50125317816ad4277c99af230bedea36b51eb065df9550b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4613447ce28f5f823dca2464f6a32d4b53a6edcffdab8b14b6bee24ad063a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa4b77ec7db777a423715817b1109136588611133bfce104406dd19d01764f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a2ff1a1df247af0af32d54e2fcca03923199363109ed674259c04067dcdcfa7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.120791 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"327579cb-11be-41d3-b3f0-d06e22ace6d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a14a2b95d0d214a2ddca91de7f1d1738e1b775b9c1d2372d683d77a91eb4e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b38a1e2a7f6ac4dd14690013c73f68d4c9e2c7143f9edad2dbc9907d4669c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b38a1e2a7f6ac4dd14690013c73f68d4c9e2c7143f9edad2dbc9907d4669c98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.135485 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10bd33aa-1784-4c9c-aaae-2c0df3304785\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T11:49:45Z\\\"
,\\\"message\\\":\\\"serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771415370\\\\\\\\\\\\\\\" (2026-02-18 11:49:29 +0000 UTC to 2026-03-20 11:49:30 +0000 UTC (now=2026-02-18 11:49:45.959609913 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959705 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959720 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 11:49:45.959738 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 11:49:45.959745 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 11:49:45.959770 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771415371\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771415370\\\\\\\\\\\\\\\" (2026-02-18 10:49:30 +0000 UTC to 2027-02-18 10:49:30 +0000 UTC (now=2026-02-18 11:49:45.959748307 +0000 UTC))\\\\\\\"\\\\nI0218 11:49:45.959789 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 11:49:45.959809 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 11:49:45.959827 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3314714654/tls.crt::/tmp/serving-cert-3314714654/tls.key\\\\\\\"\\\\nI0218 11:49:45.959864 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI0218 11:49:45.959878 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 11:49:45.960206 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0218 11:49:45.960762 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.149204 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3c80b61c103d2f665f4f7f08e06259e85bdbcc4f0b84102366f282ea999f316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.159159 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.159210 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.159228 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.159251 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.159336 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:07Z","lastTransitionTime":"2026-02-18T11:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.161301 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xmb4p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d2545cd7-d1a5-4248-a0e1-eb6f07f0023e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2750752fb268a7b53edb9cf70d528076be4f2c68ae26a02deafb9242aa07e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbz4m\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xmb4p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.173647 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"823580ef-975b-4298-955b-fb3c0b5fefc3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bc2eacd6cf3fb351fa83dc74b7c6deb9ba43075273b12b544a011bde71c8279\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t997l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-5wbk5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.209344 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-s242q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed61105a-bc90-46a4-991f-466e6836d94d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea3035a0f6654faef25b3d398a4791705a94cc23df32d838ee883b8b4808cd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85d75ac0c7251b9eb4bed4032f120aace65947cce9d8e19d7e500adf57fcf0aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1adffe0cb4c770740d4255b9d0d3603ce30b8251cea94d2db5a3995d97b2830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a6f26935ada2bdc4f3ab362cf8f4df901696e2b763b41ac8f75458aa1ea201b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663836549bb06daf2eefaf9d47cc6768e50635691bae6e3e4d8d75d24cfcb9a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63b183e43c5c2b46ccfb5313cd059ba04a551db6eaad9068b952f6fea06e7089\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6046b048a3583ebf8f8bf21a8f50522630ff7dd831d3571bb0a206ea623dfd88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9pmp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-s242q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.242005 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:43Z\\\",\\\"message\\\":\\\"[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/community-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.189\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0218 11:50:43.866333 6743 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:50:43Z is after 2025-08-24T17:21:41Z]\\\\nI0218 11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:50:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c379459856d8dcaa67
23e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xk2ht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2fh5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.254682 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5e7f9c5c9435d94cad5ae637f4357be7c3a1286e590f47884820262a610a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b325043418d4d1fd241ee5ff4e1eb33a4273307ea77d5d8307369c17620ee7ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.261653 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.261693 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.261709 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.261731 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.261746 4717 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:07Z","lastTransitionTime":"2026-02-18T11:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.270124 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.279188 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a549f413-5b44-4fac-a21e-4f41cc30fbe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vj2pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:50:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gxzpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc 
kubenswrapper[4717]: I0218 11:51:07.303147 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9cbde91-c162-4c7b-a1d5-b941ae4c5b8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7518840ee42cef58d644b46ae1e1b80697b56f388a3527d3b121e9182e52603c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://e9cbf93f12390f9b3ccc4f8de94d2f42260dd9acd5ca40c8bd9ca970e51d9d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3caff189b204e7e933545efd5a5acbb2fd97003a683bb46e570d8cf4b2ba11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd037ce86d41c95ec34b977239e3d0293bb808c3c58a76c9e857182f5690e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa91e3cfc80d3351055790e7820fb376618b3b0f33e23e56bc56bca79031cbc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://598bdc87459f43b34243a0e9d6ecd2e60de53b8b142a6ffa93096b514736f5ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://598bdc87459f43b34243a0e9d6ecd2e60de53b8b142a6ffa93096b514736f5ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f0cf6ea4909e64b0bded040d2949a469b5c2e17d01a5087c5e66f4301f353ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f0cf6ea4909e64b0bded040d2949a469b5c2e17d01a5087c5e66f4301f353ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9fa02f6578b7c706c5107415e36c2ba21826967190742b732b387c8853b10550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa02f6578b7c706c5107415e36c2ba21826967190742b732b387c8853b10550\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T11:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.314969 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hvktx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41f72a5f-4820-4dc2-a6c5-243550881aaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdc0e2a60a848f63de21d7e
284f91b49cd6d637ce86ec545ed5cd2a8fb411579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T11:50:34Z\\\",\\\"message\\\":\\\"2026-02-18T11:49:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661\\\\n2026-02-18T11:49:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_10776e54-302e-4c30-a6b0-14d53023c661 to /host/opt/cni/bin/\\\\n2026-02-18T11:49:49Z [verbose] multus-daemon started\\\\n2026-02-18T11:49:49Z [verbose] Readiness Indicator file check\\\\n2026-02-18T11:50:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T11:49:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:50:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z7rtd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hvktx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.327742 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee1c3ffd2a00b65222b1ab3dcffb14ff98f5788cc891670b29e4945f8e5781e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.339098 4717 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dzfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba9a5cc-35c1-47ea-b225-1b57b40a5e0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T11:49:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38e21d9d3bd66c882b0ddd2d1e4255574008c4e65b5283e6e4455a146bbff2b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T11:49:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fj7cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T11:49:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dzfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T11:51:07Z is after 2025-08-24T17:21:41Z" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.364674 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.364935 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.364998 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.365065 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.365124 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:07Z","lastTransitionTime":"2026-02-18T11:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.467493 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.467527 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.467535 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.467548 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.467556 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:07Z","lastTransitionTime":"2026-02-18T11:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.569564 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.569605 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.569616 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.569631 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.569644 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:07Z","lastTransitionTime":"2026-02-18T11:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.672369 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.672419 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.672434 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.672453 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.672467 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:07Z","lastTransitionTime":"2026-02-18T11:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.775016 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.775396 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.775468 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.775644 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.775767 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:07Z","lastTransitionTime":"2026-02-18T11:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.878011 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.878276 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.878370 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.878454 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.878557 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:07Z","lastTransitionTime":"2026-02-18T11:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.969874 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:08:18.962893172 +0000 UTC Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.980592 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.980617 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.980625 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.980637 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:07 crc kubenswrapper[4717]: I0218 11:51:07.980645 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:07Z","lastTransitionTime":"2026-02-18T11:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.036292 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:08 crc kubenswrapper[4717]: E0218 11:51:08.036443 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.036499 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.036545 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:08 crc kubenswrapper[4717]: E0218 11:51:08.036746 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:08 crc kubenswrapper[4717]: E0218 11:51:08.036559 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.083623 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.083988 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.084088 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.084185 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.084294 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:08Z","lastTransitionTime":"2026-02-18T11:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.187221 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.187252 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.187275 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.187289 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.187298 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:08Z","lastTransitionTime":"2026-02-18T11:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.289127 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.289185 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.289193 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.289206 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.289223 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:08Z","lastTransitionTime":"2026-02-18T11:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.391425 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.391478 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.391493 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.391511 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.391521 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:08Z","lastTransitionTime":"2026-02-18T11:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.494098 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.494137 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.494150 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.494164 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.494177 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:08Z","lastTransitionTime":"2026-02-18T11:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.596497 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.596528 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.596537 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.596550 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.596559 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:08Z","lastTransitionTime":"2026-02-18T11:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.698992 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.699059 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.699068 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.699083 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.699093 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:08Z","lastTransitionTime":"2026-02-18T11:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.800992 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.801065 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.801080 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.801102 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.801113 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:08Z","lastTransitionTime":"2026-02-18T11:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.903571 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.903631 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.903655 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.903678 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.903692 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:08Z","lastTransitionTime":"2026-02-18T11:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:08 crc kubenswrapper[4717]: I0218 11:51:08.970634 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:51:07.761921472 +0000 UTC Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.006861 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.006931 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.006952 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.006975 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.006995 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:09Z","lastTransitionTime":"2026-02-18T11:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.036379 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:09 crc kubenswrapper[4717]: E0218 11:51:09.036669 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.109101 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.109152 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.109165 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.109183 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.109196 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:09Z","lastTransitionTime":"2026-02-18T11:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.211036 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.211083 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.211094 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.211109 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.211118 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:09Z","lastTransitionTime":"2026-02-18T11:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.313645 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.313771 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.313788 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.313844 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.313854 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:09Z","lastTransitionTime":"2026-02-18T11:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.418397 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.418704 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.418765 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.418849 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.418920 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:09Z","lastTransitionTime":"2026-02-18T11:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.521982 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.522034 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.522042 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.522057 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.522065 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:09Z","lastTransitionTime":"2026-02-18T11:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.624910 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.624967 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.624978 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.624996 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.625009 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:09Z","lastTransitionTime":"2026-02-18T11:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.727359 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.727419 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.727431 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.727449 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.727460 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:09Z","lastTransitionTime":"2026-02-18T11:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.830141 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.830513 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.830638 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.830759 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.830919 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:09Z","lastTransitionTime":"2026-02-18T11:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.933671 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.933711 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.933725 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.933741 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.933752 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:09Z","lastTransitionTime":"2026-02-18T11:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:09 crc kubenswrapper[4717]: I0218 11:51:09.971241 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 03:11:42.269618609 +0000 UTC Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.035538 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.035625 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.035949 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.036024 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:10 crc kubenswrapper[4717]: E0218 11:51:10.036036 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.036045 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.036105 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.036120 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.036130 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:10Z","lastTransitionTime":"2026-02-18T11:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:10 crc kubenswrapper[4717]: E0218 11:51:10.036186 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:10 crc kubenswrapper[4717]: E0218 11:51:10.036351 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.138137 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.138206 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.138223 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.138247 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.138293 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:10Z","lastTransitionTime":"2026-02-18T11:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.241200 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.241248 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.241276 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.241294 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.241306 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:10Z","lastTransitionTime":"2026-02-18T11:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.344003 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.344054 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.344066 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.344082 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.344097 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:10Z","lastTransitionTime":"2026-02-18T11:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.446743 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.447063 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.447153 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.447240 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.447482 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:10Z","lastTransitionTime":"2026-02-18T11:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.550937 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.550976 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.550988 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.551004 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.551017 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:10Z","lastTransitionTime":"2026-02-18T11:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.653608 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.653864 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.654062 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.654194 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.654345 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:10Z","lastTransitionTime":"2026-02-18T11:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.757113 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.757156 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.757168 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.757185 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.757198 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:10Z","lastTransitionTime":"2026-02-18T11:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.860159 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.860210 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.860220 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.860237 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.860250 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:10Z","lastTransitionTime":"2026-02-18T11:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.963146 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.963212 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.963227 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.963248 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.963289 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:10Z","lastTransitionTime":"2026-02-18T11:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:10 crc kubenswrapper[4717]: I0218 11:51:10.972519 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 17:07:24.185605226 +0000 UTC Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.036755 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:11 crc kubenswrapper[4717]: E0218 11:51:11.036912 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.066606 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.066991 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.067212 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.067474 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.067709 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:11Z","lastTransitionTime":"2026-02-18T11:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.170538 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.170876 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.171020 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.171148 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.171347 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:11Z","lastTransitionTime":"2026-02-18T11:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.273994 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.274030 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.274060 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.274078 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.274090 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:11Z","lastTransitionTime":"2026-02-18T11:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.376068 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.376131 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.376141 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.376153 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.376161 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:11Z","lastTransitionTime":"2026-02-18T11:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.478719 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.478966 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.479038 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.479130 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.479240 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:11Z","lastTransitionTime":"2026-02-18T11:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.581723 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.581767 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.581778 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.581793 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.581803 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:11Z","lastTransitionTime":"2026-02-18T11:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.684114 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.684169 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.684185 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.684198 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.684207 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:11Z","lastTransitionTime":"2026-02-18T11:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.786038 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.786074 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.786085 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.786102 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.786112 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:11Z","lastTransitionTime":"2026-02-18T11:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.888705 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.888751 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.888768 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.888786 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.888797 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:11Z","lastTransitionTime":"2026-02-18T11:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.973096 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 06:48:56.980750968 +0000 UTC Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.991336 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.991387 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.991398 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.991418 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:11 crc kubenswrapper[4717]: I0218 11:51:11.991428 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:11Z","lastTransitionTime":"2026-02-18T11:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.036532 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.036583 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:12 crc kubenswrapper[4717]: E0218 11:51:12.036648 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:12 crc kubenswrapper[4717]: E0218 11:51:12.037077 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.037157 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:12 crc kubenswrapper[4717]: E0218 11:51:12.037249 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.098545 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.098637 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.098947 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.099359 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.099389 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:12Z","lastTransitionTime":"2026-02-18T11:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.202832 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.202877 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.202890 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.202907 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.202916 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:12Z","lastTransitionTime":"2026-02-18T11:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.307199 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.307284 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.307301 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.307318 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.307331 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:12Z","lastTransitionTime":"2026-02-18T11:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.410067 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.410397 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.410502 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.410594 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.410658 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:12Z","lastTransitionTime":"2026-02-18T11:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.513096 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.513394 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.513529 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.513626 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.513691 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:12Z","lastTransitionTime":"2026-02-18T11:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.616468 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.616519 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.616531 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.616580 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.616595 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:12Z","lastTransitionTime":"2026-02-18T11:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.719228 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.719326 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.719337 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.719354 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.719365 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:12Z","lastTransitionTime":"2026-02-18T11:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.821609 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.821680 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.821692 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.821706 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.821715 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:12Z","lastTransitionTime":"2026-02-18T11:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.923667 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.923704 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.923714 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.923729 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.923737 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:12Z","lastTransitionTime":"2026-02-18T11:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:12 crc kubenswrapper[4717]: I0218 11:51:12.973519 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 07:20:37.775871411 +0000 UTC Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.026028 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.026301 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.026442 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.026546 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.026668 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:13Z","lastTransitionTime":"2026-02-18T11:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.035470 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:13 crc kubenswrapper[4717]: E0218 11:51:13.035593 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.129454 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.129495 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.129509 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.129525 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.129536 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:13Z","lastTransitionTime":"2026-02-18T11:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.231921 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.232000 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.232021 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.232038 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.232049 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:13Z","lastTransitionTime":"2026-02-18T11:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.334022 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.334067 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.334079 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.334096 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.334108 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:13Z","lastTransitionTime":"2026-02-18T11:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.436658 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.436701 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.436714 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.436730 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.436742 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:13Z","lastTransitionTime":"2026-02-18T11:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.538738 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.538843 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.538854 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.538867 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.538877 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:13Z","lastTransitionTime":"2026-02-18T11:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.640697 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.640724 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.640732 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.640745 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.640753 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:13Z","lastTransitionTime":"2026-02-18T11:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.743223 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.743279 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.743291 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.743306 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.743317 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:13Z","lastTransitionTime":"2026-02-18T11:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.845435 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.845468 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.845479 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.845494 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.845506 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:13Z","lastTransitionTime":"2026-02-18T11:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.948376 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.948421 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.948433 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.948450 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.948462 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:13Z","lastTransitionTime":"2026-02-18T11:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:13 crc kubenswrapper[4717]: I0218 11:51:13.975001 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 20:22:13.062809722 +0000 UTC Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.036043 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.036105 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.036070 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:14 crc kubenswrapper[4717]: E0218 11:51:14.036213 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:14 crc kubenswrapper[4717]: E0218 11:51:14.036553 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:14 crc kubenswrapper[4717]: E0218 11:51:14.036609 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.037351 4717 scope.go:117] "RemoveContainer" containerID="09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24" Feb 18 11:51:14 crc kubenswrapper[4717]: E0218 11:51:14.037526 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2fh5s_openshift-ovn-kubernetes(26c6bcf7-2c2a-41bf-b76c-4f040f5693f6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.050942 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.050984 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.050993 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.051007 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.051018 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:14Z","lastTransitionTime":"2026-02-18T11:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.153825 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.153875 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.153887 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.153904 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.153915 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:14Z","lastTransitionTime":"2026-02-18T11:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.256529 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.256568 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.256580 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.256595 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.256608 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:14Z","lastTransitionTime":"2026-02-18T11:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.360072 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.360126 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.360135 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.360151 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.360161 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:14Z","lastTransitionTime":"2026-02-18T11:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.462291 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.462580 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.462645 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.462719 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.462783 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:14Z","lastTransitionTime":"2026-02-18T11:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.496631 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.496713 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.496723 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.496737 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.496747 4717 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T11:51:14Z","lastTransitionTime":"2026-02-18T11:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.541910 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp"] Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.542466 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:14 crc kubenswrapper[4717]: W0218 11:51:14.545158 4717 reflector.go:561] object-"openshift-cluster-version"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-version": no relationship found between node 'crc' and this object Feb 18 11:51:14 crc kubenswrapper[4717]: E0218 11:51:14.545208 4717 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-version\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.545348 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 11:51:14 crc kubenswrapper[4717]: W0218 11:51:14.545471 4717 reflector.go:561] object-"openshift-cluster-version"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-version": no relationship found between node 'crc' and this object Feb 18 11:51:14 crc kubenswrapper[4717]: E0218 11:51:14.545525 4717 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-version\": no relationship 
found between node 'crc' and this object" logger="UnhandledError" Feb 18 11:51:14 crc kubenswrapper[4717]: W0218 11:51:14.545647 4717 reflector.go:561] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": failed to list *v1.Secret: secrets "cluster-version-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-version": no relationship found between node 'crc' and this object Feb 18 11:51:14 crc kubenswrapper[4717]: E0218 11:51:14.545733 4717 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cluster-version-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-version\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.557184 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nqztm" podStartSLOduration=88.55716846 podStartE2EDuration="1m28.55716846s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:14.556985825 +0000 UTC m=+108.959087141" watchObservedRunningTime="2026-02-18 11:51:14.55716846 +0000 UTC m=+108.959269786" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.591309 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.591287462 podStartE2EDuration="1m29.591287462s" podCreationTimestamp="2026-02-18 11:49:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:14.589501241 +0000 UTC m=+108.991602557" watchObservedRunningTime="2026-02-18 11:51:14.591287462 +0000 UTC m=+108.993388788" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.614292 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=60.61427102 podStartE2EDuration="1m0.61427102s" podCreationTimestamp="2026-02-18 11:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:14.602091696 +0000 UTC m=+109.004193002" watchObservedRunningTime="2026-02-18 11:51:14.61427102 +0000 UTC m=+109.016372336" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.631089 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=37.631068793 podStartE2EDuration="37.631068793s" podCreationTimestamp="2026-02-18 11:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:14.614600899 +0000 UTC m=+109.016702215" watchObservedRunningTime="2026-02-18 11:51:14.631068793 +0000 UTC m=+109.033170109" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.643351 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.643330889 podStartE2EDuration="1m28.643330889s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:14.631602598 +0000 UTC m=+109.033703914" watchObservedRunningTime="2026-02-18 11:51:14.643330889 +0000 UTC m=+109.045432215" Feb 18 11:51:14 crc 
kubenswrapper[4717]: I0218 11:51:14.647090 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-w2dlp\" (UID: \"d8b6e005-cab5-4352-b737-f4d0b8c3eef0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.647139 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-w2dlp\" (UID: \"d8b6e005-cab5-4352-b737-f4d0b8c3eef0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.647167 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-w2dlp\" (UID: \"d8b6e005-cab5-4352-b737-f4d0b8c3eef0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.647197 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-w2dlp\" (UID: \"d8b6e005-cab5-4352-b737-f4d0b8c3eef0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.647229 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-w2dlp\" (UID: \"d8b6e005-cab5-4352-b737-f4d0b8c3eef0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.691302 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xmb4p" podStartSLOduration=88.691288881 podStartE2EDuration="1m28.691288881s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:14.678939482 +0000 UTC m=+109.081040818" watchObservedRunningTime="2026-02-18 11:51:14.691288881 +0000 UTC m=+109.093390197" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.691378 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podStartSLOduration=88.691374653 podStartE2EDuration="1m28.691374653s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:14.691170317 +0000 UTC m=+109.093271633" watchObservedRunningTime="2026-02-18 11:51:14.691374653 +0000 UTC m=+109.093475969" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.707036 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-s242q" podStartSLOduration=88.707009024 podStartE2EDuration="1m28.707009024s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:14.706369886 +0000 UTC m=+109.108471202" watchObservedRunningTime="2026-02-18 11:51:14.707009024 +0000 UTC 
m=+109.109110340" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.747998 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-w2dlp\" (UID: \"d8b6e005-cab5-4352-b737-f4d0b8c3eef0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.748041 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-w2dlp\" (UID: \"d8b6e005-cab5-4352-b737-f4d0b8c3eef0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.748072 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-w2dlp\" (UID: \"d8b6e005-cab5-4352-b737-f4d0b8c3eef0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.748112 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-w2dlp\" (UID: \"d8b6e005-cab5-4352-b737-f4d0b8c3eef0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.748128 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-kube-api-access\") 
pod \"cluster-version-operator-5c965bbfc6-w2dlp\" (UID: \"d8b6e005-cab5-4352-b737-f4d0b8c3eef0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.748183 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-w2dlp\" (UID: \"d8b6e005-cab5-4352-b737-f4d0b8c3eef0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.748188 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-w2dlp\" (UID: \"d8b6e005-cab5-4352-b737-f4d0b8c3eef0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.806410 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=28.806387085 podStartE2EDuration="28.806387085s" podCreationTimestamp="2026-02-18 11:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:14.791383202 +0000 UTC m=+109.193484538" watchObservedRunningTime="2026-02-18 11:51:14.806387085 +0000 UTC m=+109.208488401" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.819860 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hvktx" podStartSLOduration=88.819844095 podStartE2EDuration="1m28.819844095s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 11:51:14.807758214 +0000 UTC m=+109.209859530" watchObservedRunningTime="2026-02-18 11:51:14.819844095 +0000 UTC m=+109.221945411" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.829568 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-4dzfm" podStartSLOduration=88.829555988 podStartE2EDuration="1m28.829555988s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:14.829129156 +0000 UTC m=+109.231230472" watchObservedRunningTime="2026-02-18 11:51:14.829555988 +0000 UTC m=+109.231657304" Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.975218 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:35:40.576949093 +0000 UTC Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.975294 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 18 11:51:14 crc kubenswrapper[4717]: I0218 11:51:14.982351 4717 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 11:51:15 crc kubenswrapper[4717]: I0218 11:51:15.035803 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:15 crc kubenswrapper[4717]: E0218 11:51:15.036084 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:15 crc kubenswrapper[4717]: I0218 11:51:15.447251 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 11:51:15 crc kubenswrapper[4717]: I0218 11:51:15.453877 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 11:51:15 crc kubenswrapper[4717]: I0218 11:51:15.454137 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-w2dlp\" (UID: \"d8b6e005-cab5-4352-b737-f4d0b8c3eef0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:15 crc kubenswrapper[4717]: I0218 11:51:15.460050 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-w2dlp\" (UID: \"d8b6e005-cab5-4352-b737-f4d0b8c3eef0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:15 crc kubenswrapper[4717]: E0218 11:51:15.763759 4717 projected.go:288] Couldn't get configMap openshift-cluster-version/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 18 11:51:15 crc kubenswrapper[4717]: E0218 11:51:15.763795 4717 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp: failed to sync configmap cache: timed out waiting for the condition Feb 18 11:51:15 crc kubenswrapper[4717]: E0218 11:51:15.763851 4717 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-kube-api-access podName:d8b6e005-cab5-4352-b737-f4d0b8c3eef0 nodeName:}" failed. No retries permitted until 2026-02-18 11:51:16.263834275 +0000 UTC m=+110.665935591 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-kube-api-access") pod "cluster-version-operator-5c965bbfc6-w2dlp" (UID: "d8b6e005-cab5-4352-b737-f4d0b8c3eef0") : failed to sync configmap cache: timed out waiting for the condition Feb 18 11:51:15 crc kubenswrapper[4717]: I0218 11:51:15.985814 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 11:51:16 crc kubenswrapper[4717]: I0218 11:51:16.035907 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:16 crc kubenswrapper[4717]: I0218 11:51:16.036021 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:16 crc kubenswrapper[4717]: E0218 11:51:16.036070 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:16 crc kubenswrapper[4717]: I0218 11:51:16.036097 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:16 crc kubenswrapper[4717]: E0218 11:51:16.036203 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:16 crc kubenswrapper[4717]: E0218 11:51:16.036318 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:16 crc kubenswrapper[4717]: I0218 11:51:16.362375 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-w2dlp\" (UID: \"d8b6e005-cab5-4352-b737-f4d0b8c3eef0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:16 crc kubenswrapper[4717]: I0218 11:51:16.367136 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d8b6e005-cab5-4352-b737-f4d0b8c3eef0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-w2dlp\" (UID: \"d8b6e005-cab5-4352-b737-f4d0b8c3eef0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:16 crc kubenswrapper[4717]: I0218 11:51:16.658560 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" Feb 18 11:51:17 crc kubenswrapper[4717]: I0218 11:51:17.035540 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:17 crc kubenswrapper[4717]: E0218 11:51:17.036463 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:17 crc kubenswrapper[4717]: I0218 11:51:17.660182 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" event={"ID":"d8b6e005-cab5-4352-b737-f4d0b8c3eef0","Type":"ContainerStarted","Data":"56e256d65da2f2bde814a232b057411e44879f2210830b719d44551c79eb9ac5"} Feb 18 11:51:17 crc kubenswrapper[4717]: I0218 11:51:17.660229 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" event={"ID":"d8b6e005-cab5-4352-b737-f4d0b8c3eef0","Type":"ContainerStarted","Data":"1c41431f9efb273b5e955f1bb6e654a904955ebbcfb005454d08f3cd1250d54b"} Feb 18 11:51:17 crc kubenswrapper[4717]: I0218 11:51:17.676834 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-w2dlp" podStartSLOduration=91.676815221 podStartE2EDuration="1m31.676815221s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:17.676665687 +0000 UTC 
m=+112.078767003" watchObservedRunningTime="2026-02-18 11:51:17.676815221 +0000 UTC m=+112.078916537" Feb 18 11:51:18 crc kubenswrapper[4717]: I0218 11:51:18.035844 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:18 crc kubenswrapper[4717]: I0218 11:51:18.035873 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:18 crc kubenswrapper[4717]: I0218 11:51:18.035911 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:18 crc kubenswrapper[4717]: E0218 11:51:18.035997 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:18 crc kubenswrapper[4717]: E0218 11:51:18.036113 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:18 crc kubenswrapper[4717]: E0218 11:51:18.036205 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:19 crc kubenswrapper[4717]: I0218 11:51:19.036069 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:19 crc kubenswrapper[4717]: E0218 11:51:19.036536 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:20 crc kubenswrapper[4717]: I0218 11:51:20.036228 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:20 crc kubenswrapper[4717]: I0218 11:51:20.036335 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:20 crc kubenswrapper[4717]: E0218 11:51:20.036829 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:20 crc kubenswrapper[4717]: I0218 11:51:20.036360 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:20 crc kubenswrapper[4717]: E0218 11:51:20.036929 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:20 crc kubenswrapper[4717]: E0218 11:51:20.036700 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:21 crc kubenswrapper[4717]: I0218 11:51:21.036015 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:21 crc kubenswrapper[4717]: E0218 11:51:21.036221 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:21 crc kubenswrapper[4717]: I0218 11:51:21.673125 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hvktx_41f72a5f-4820-4dc2-a6c5-243550881aaf/kube-multus/1.log" Feb 18 11:51:21 crc kubenswrapper[4717]: I0218 11:51:21.673617 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hvktx_41f72a5f-4820-4dc2-a6c5-243550881aaf/kube-multus/0.log" Feb 18 11:51:21 crc kubenswrapper[4717]: I0218 11:51:21.673673 4717 generic.go:334] "Generic (PLEG): container finished" podID="41f72a5f-4820-4dc2-a6c5-243550881aaf" containerID="bdc0e2a60a848f63de21d7e284f91b49cd6d637ce86ec545ed5cd2a8fb411579" exitCode=1 Feb 18 11:51:21 crc kubenswrapper[4717]: I0218 11:51:21.673711 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hvktx" event={"ID":"41f72a5f-4820-4dc2-a6c5-243550881aaf","Type":"ContainerDied","Data":"bdc0e2a60a848f63de21d7e284f91b49cd6d637ce86ec545ed5cd2a8fb411579"} Feb 18 11:51:21 crc kubenswrapper[4717]: I0218 11:51:21.673763 4717 scope.go:117] "RemoveContainer" containerID="74aa2a225ba93880304b0f8febc2c364f764d25f139c9e30b6ad0aae422ab228" Feb 18 11:51:21 crc kubenswrapper[4717]: I0218 11:51:21.677737 4717 scope.go:117] "RemoveContainer" containerID="bdc0e2a60a848f63de21d7e284f91b49cd6d637ce86ec545ed5cd2a8fb411579" Feb 18 11:51:21 crc kubenswrapper[4717]: E0218 11:51:21.677995 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hvktx_openshift-multus(41f72a5f-4820-4dc2-a6c5-243550881aaf)\"" pod="openshift-multus/multus-hvktx" podUID="41f72a5f-4820-4dc2-a6c5-243550881aaf" Feb 18 11:51:22 crc kubenswrapper[4717]: I0218 11:51:22.035638 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:22 crc kubenswrapper[4717]: I0218 11:51:22.035834 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:22 crc kubenswrapper[4717]: E0218 11:51:22.035978 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:22 crc kubenswrapper[4717]: I0218 11:51:22.036058 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:22 crc kubenswrapper[4717]: E0218 11:51:22.036160 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:22 crc kubenswrapper[4717]: E0218 11:51:22.036209 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:22 crc kubenswrapper[4717]: I0218 11:51:22.678298 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hvktx_41f72a5f-4820-4dc2-a6c5-243550881aaf/kube-multus/1.log" Feb 18 11:51:23 crc kubenswrapper[4717]: I0218 11:51:23.036046 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:23 crc kubenswrapper[4717]: E0218 11:51:23.036463 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:24 crc kubenswrapper[4717]: I0218 11:51:24.035719 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:24 crc kubenswrapper[4717]: I0218 11:51:24.035719 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:24 crc kubenswrapper[4717]: I0218 11:51:24.035869 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:24 crc kubenswrapper[4717]: E0218 11:51:24.035988 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:24 crc kubenswrapper[4717]: E0218 11:51:24.036049 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:24 crc kubenswrapper[4717]: E0218 11:51:24.036141 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:25 crc kubenswrapper[4717]: I0218 11:51:25.036522 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:25 crc kubenswrapper[4717]: E0218 11:51:25.036649 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:26 crc kubenswrapper[4717]: I0218 11:51:26.036226 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:26 crc kubenswrapper[4717]: I0218 11:51:26.036316 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:26 crc kubenswrapper[4717]: E0218 11:51:26.036385 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:26 crc kubenswrapper[4717]: I0218 11:51:26.036228 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:26 crc kubenswrapper[4717]: E0218 11:51:26.036446 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:26 crc kubenswrapper[4717]: E0218 11:51:26.036573 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:26 crc kubenswrapper[4717]: E0218 11:51:26.978241 4717 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 18 11:51:27 crc kubenswrapper[4717]: I0218 11:51:27.036014 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:27 crc kubenswrapper[4717]: E0218 11:51:27.037424 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:27 crc kubenswrapper[4717]: E0218 11:51:27.160244 4717 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 11:51:28 crc kubenswrapper[4717]: I0218 11:51:28.036589 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:28 crc kubenswrapper[4717]: I0218 11:51:28.036921 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:28 crc kubenswrapper[4717]: I0218 11:51:28.037307 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:28 crc kubenswrapper[4717]: E0218 11:51:28.037620 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:28 crc kubenswrapper[4717]: E0218 11:51:28.037830 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:28 crc kubenswrapper[4717]: E0218 11:51:28.037888 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:28 crc kubenswrapper[4717]: I0218 11:51:28.038158 4717 scope.go:117] "RemoveContainer" containerID="09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24" Feb 18 11:51:28 crc kubenswrapper[4717]: I0218 11:51:28.698029 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovnkube-controller/3.log" Feb 18 11:51:28 crc kubenswrapper[4717]: I0218 11:51:28.700240 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerStarted","Data":"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade"} Feb 18 11:51:28 crc kubenswrapper[4717]: I0218 11:51:28.700693 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:51:28 crc kubenswrapper[4717]: I0218 11:51:28.725430 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podStartSLOduration=102.725392064 podStartE2EDuration="1m42.725392064s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:28.725391584 +0000 UTC m=+123.127492920" watchObservedRunningTime="2026-02-18 11:51:28.725392064 +0000 UTC m=+123.127493380" Feb 18 11:51:29 crc kubenswrapper[4717]: I0218 11:51:29.036276 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:29 crc kubenswrapper[4717]: E0218 11:51:29.036436 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:29 crc kubenswrapper[4717]: I0218 11:51:29.091852 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gxzpl"] Feb 18 11:51:29 crc kubenswrapper[4717]: I0218 11:51:29.091990 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:29 crc kubenswrapper[4717]: E0218 11:51:29.092169 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:30 crc kubenswrapper[4717]: I0218 11:51:30.035695 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:30 crc kubenswrapper[4717]: E0218 11:51:30.035873 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:30 crc kubenswrapper[4717]: I0218 11:51:30.036259 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:30 crc kubenswrapper[4717]: E0218 11:51:30.036415 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:31 crc kubenswrapper[4717]: I0218 11:51:31.035668 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:31 crc kubenswrapper[4717]: I0218 11:51:31.035741 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:31 crc kubenswrapper[4717]: E0218 11:51:31.035838 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:31 crc kubenswrapper[4717]: E0218 11:51:31.036049 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:32 crc kubenswrapper[4717]: I0218 11:51:32.035388 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:32 crc kubenswrapper[4717]: E0218 11:51:32.035870 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:32 crc kubenswrapper[4717]: I0218 11:51:32.035503 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:32 crc kubenswrapper[4717]: E0218 11:51:32.036537 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:32 crc kubenswrapper[4717]: E0218 11:51:32.161432 4717 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 11:51:33 crc kubenswrapper[4717]: I0218 11:51:33.036584 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:33 crc kubenswrapper[4717]: I0218 11:51:33.036621 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:33 crc kubenswrapper[4717]: E0218 11:51:33.036770 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:33 crc kubenswrapper[4717]: E0218 11:51:33.036973 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:33 crc kubenswrapper[4717]: I0218 11:51:33.685197 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:51:34 crc kubenswrapper[4717]: I0218 11:51:34.035433 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:34 crc kubenswrapper[4717]: E0218 11:51:34.035604 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:34 crc kubenswrapper[4717]: I0218 11:51:34.035791 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:34 crc kubenswrapper[4717]: E0218 11:51:34.035918 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:35 crc kubenswrapper[4717]: I0218 11:51:35.035784 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:35 crc kubenswrapper[4717]: I0218 11:51:35.035914 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:35 crc kubenswrapper[4717]: E0218 11:51:35.035960 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:35 crc kubenswrapper[4717]: E0218 11:51:35.036071 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:36 crc kubenswrapper[4717]: I0218 11:51:36.036124 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:36 crc kubenswrapper[4717]: E0218 11:51:36.036241 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:36 crc kubenswrapper[4717]: I0218 11:51:36.036124 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:36 crc kubenswrapper[4717]: E0218 11:51:36.036480 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:37 crc kubenswrapper[4717]: I0218 11:51:37.036384 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:37 crc kubenswrapper[4717]: I0218 11:51:37.037419 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:37 crc kubenswrapper[4717]: E0218 11:51:37.037487 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:37 crc kubenswrapper[4717]: E0218 11:51:37.037528 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:37 crc kubenswrapper[4717]: I0218 11:51:37.037662 4717 scope.go:117] "RemoveContainer" containerID="bdc0e2a60a848f63de21d7e284f91b49cd6d637ce86ec545ed5cd2a8fb411579" Feb 18 11:51:37 crc kubenswrapper[4717]: E0218 11:51:37.162134 4717 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 11:51:37 crc kubenswrapper[4717]: I0218 11:51:37.728078 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hvktx_41f72a5f-4820-4dc2-a6c5-243550881aaf/kube-multus/1.log" Feb 18 11:51:37 crc kubenswrapper[4717]: I0218 11:51:37.728122 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hvktx" event={"ID":"41f72a5f-4820-4dc2-a6c5-243550881aaf","Type":"ContainerStarted","Data":"99bf552605d0eed937374d50a80b61ae031663c7b68d2cb0435c94aa2c2469cd"} Feb 18 11:51:38 crc kubenswrapper[4717]: I0218 11:51:38.036464 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:38 crc kubenswrapper[4717]: I0218 11:51:38.036527 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:38 crc kubenswrapper[4717]: E0218 11:51:38.036624 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:38 crc kubenswrapper[4717]: E0218 11:51:38.036772 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:39 crc kubenswrapper[4717]: I0218 11:51:39.036011 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:39 crc kubenswrapper[4717]: I0218 11:51:39.036106 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:39 crc kubenswrapper[4717]: E0218 11:51:39.036145 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:39 crc kubenswrapper[4717]: E0218 11:51:39.036252 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:40 crc kubenswrapper[4717]: I0218 11:51:40.035593 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:40 crc kubenswrapper[4717]: I0218 11:51:40.035603 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:40 crc kubenswrapper[4717]: E0218 11:51:40.036385 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:40 crc kubenswrapper[4717]: E0218 11:51:40.036500 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:41 crc kubenswrapper[4717]: I0218 11:51:41.036089 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:41 crc kubenswrapper[4717]: I0218 11:51:41.036212 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:41 crc kubenswrapper[4717]: E0218 11:51:41.036291 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gxzpl" podUID="a549f413-5b44-4fac-a21e-4f41cc30fbe6" Feb 18 11:51:41 crc kubenswrapper[4717]: E0218 11:51:41.036443 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 11:51:42 crc kubenswrapper[4717]: I0218 11:51:42.035645 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:42 crc kubenswrapper[4717]: E0218 11:51:42.035851 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 11:51:42 crc kubenswrapper[4717]: I0218 11:51:42.035886 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:42 crc kubenswrapper[4717]: E0218 11:51:42.036132 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 11:51:43 crc kubenswrapper[4717]: I0218 11:51:43.036485 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:51:43 crc kubenswrapper[4717]: I0218 11:51:43.036604 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:43 crc kubenswrapper[4717]: I0218 11:51:43.039281 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 11:51:43 crc kubenswrapper[4717]: I0218 11:51:43.040159 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 11:51:43 crc kubenswrapper[4717]: I0218 11:51:43.040702 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 11:51:43 crc kubenswrapper[4717]: I0218 11:51:43.042395 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 11:51:44 crc kubenswrapper[4717]: I0218 11:51:44.036344 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:44 crc kubenswrapper[4717]: I0218 11:51:44.036585 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:44 crc kubenswrapper[4717]: I0218 11:51:44.038484 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 11:51:44 crc kubenswrapper[4717]: I0218 11:51:44.038987 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.249978 4717 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.279095 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z4csx"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.279849 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.280073 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.280850 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.282009 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kzbdk"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.282798 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.283436 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.283798 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.284517 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-svr2x"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.284862 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.285523 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.285983 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.286090 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.287371 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-2xjnq"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.287759 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.288859 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.297103 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.302216 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.305955 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.305972 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.306230 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.306301 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.306486 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.306579 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.307266 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.307292 4717 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"encryption-config-1" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.307270 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.307664 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.307712 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.307951 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.307997 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.308642 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7stbl"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.308880 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.308902 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.311128 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7stbl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.312389 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.312801 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.313146 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.313233 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.313327 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.314123 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.314521 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.314768 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.314815 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.314861 4717 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.314967 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.315137 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.315192 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.315339 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.315637 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.315959 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.313151 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.313156 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.331811 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.331987 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 
11:51:45.331817 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.332407 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.333012 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.333094 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.333227 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.336166 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.336267 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.336453 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.336640 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.336779 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.336868 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.337209 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.336885 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.336784 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-s9dzs"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.339415 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.339426 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.339694 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.339692 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.340374 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.341027 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.341576 4717 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-console/console-f9d7485db-mk6cn"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.341874 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.342140 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tph6g"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.342565 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.342611 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.342916 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sgvs4"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.343229 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.343314 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.343388 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.343781 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-tph6g" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.343794 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-s9dzs" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.344006 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.344961 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.345538 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.346364 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.347775 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.347962 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.348077 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.348207 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.349110 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.349379 4717 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.352182 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bxl4l"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.354773 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.355022 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.355324 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.355446 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.355574 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.355768 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.355875 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.355967 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.356074 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 
11:51:45.356207 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.356357 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.356479 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.356718 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kzbdk"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.356758 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nj9hx"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.356734 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.357309 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.357545 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.357681 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.357894 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/459ee658-4b95-447e-9356-393ff12613b5-config\") pod \"authentication-operator-69f744f599-2xjnq\" (UID: \"459ee658-4b95-447e-9356-393ff12613b5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.357929 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-client-ca\") pod \"controller-manager-879f6c89f-svr2x\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.357954 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c88dca8-b239-4d98-b56a-3df7b296e4e7-etcd-client\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.357975 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s96p\" (UniqueName: 
\"kubernetes.io/projected/ff0e32a4-5a0f-4779-a1db-b67aec04f414-kube-api-access-8s96p\") pod \"route-controller-manager-6576b87f9c-vkklc\" (UID: \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.357995 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv7sq\" (UniqueName: \"kubernetes.io/projected/0546320f-3929-4452-a505-bdbb872741ad-kube-api-access-vv7sq\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358012 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0546320f-3929-4452-a505-bdbb872741ad-encryption-config\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358027 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff0e32a4-5a0f-4779-a1db-b67aec04f414-serving-cert\") pod \"route-controller-manager-6576b87f9c-vkklc\" (UID: \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358041 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-config\") pod \"controller-manager-879f6c89f-svr2x\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 
11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358056 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvhmp\" (UniqueName: \"kubernetes.io/projected/0dc8b28c-8f88-4497-a439-2f1500cda5c2-kube-api-access-qvhmp\") pod \"machine-approver-56656f9798-96gdl\" (UID: \"0dc8b28c-8f88-4497-a439-2f1500cda5c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358076 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/459ee658-4b95-447e-9356-393ff12613b5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-2xjnq\" (UID: \"459ee658-4b95-447e-9356-393ff12613b5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358097 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/459ee658-4b95-447e-9356-393ff12613b5-service-ca-bundle\") pod \"authentication-operator-69f744f599-2xjnq\" (UID: \"459ee658-4b95-447e-9356-393ff12613b5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358115 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0546320f-3929-4452-a505-bdbb872741ad-etcd-serving-ca\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358131 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/6c88dca8-b239-4d98-b56a-3df7b296e4e7-encryption-config\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358162 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8m9r\" (UniqueName: \"kubernetes.io/projected/baa2972a-fc13-4b3b-bf4b-9dceaf35db41-kube-api-access-q8m9r\") pod \"machine-api-operator-5694c8668f-kzbdk\" (UID: \"baa2972a-fc13-4b3b-bf4b-9dceaf35db41\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358181 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvgk2\" (UniqueName: \"kubernetes.io/projected/6c88dca8-b239-4d98-b56a-3df7b296e4e7-kube-api-access-pvgk2\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358202 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72bjr\" (UniqueName: \"kubernetes.io/projected/459ee658-4b95-447e-9356-393ff12613b5-kube-api-access-72bjr\") pod \"authentication-operator-69f744f599-2xjnq\" (UID: \"459ee658-4b95-447e-9356-393ff12613b5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358219 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6c88dca8-b239-4d98-b56a-3df7b296e4e7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358235 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0e32a4-5a0f-4779-a1db-b67aec04f414-config\") pod \"route-controller-manager-6576b87f9c-vkklc\" (UID: \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358283 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn75r\" (UniqueName: \"kubernetes.io/projected/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-kube-api-access-kn75r\") pod \"controller-manager-879f6c89f-svr2x\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358308 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa2972a-fc13-4b3b-bf4b-9dceaf35db41-config\") pod \"machine-api-operator-5694c8668f-kzbdk\" (UID: \"baa2972a-fc13-4b3b-bf4b-9dceaf35db41\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358341 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0546320f-3929-4452-a505-bdbb872741ad-etcd-client\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358363 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6c88dca8-b239-4d98-b56a-3df7b296e4e7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358386 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0546320f-3929-4452-a505-bdbb872741ad-config\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358404 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/baa2972a-fc13-4b3b-bf4b-9dceaf35db41-images\") pod \"machine-api-operator-5694c8668f-kzbdk\" (UID: \"baa2972a-fc13-4b3b-bf4b-9dceaf35db41\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358428 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/baa2972a-fc13-4b3b-bf4b-9dceaf35db41-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kzbdk\" (UID: \"baa2972a-fc13-4b3b-bf4b-9dceaf35db41\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358450 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0546320f-3929-4452-a505-bdbb872741ad-audit\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358469 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0dc8b28c-8f88-4497-a439-2f1500cda5c2-auth-proxy-config\") pod \"machine-approver-56656f9798-96gdl\" (UID: \"0dc8b28c-8f88-4497-a439-2f1500cda5c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358490 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/459ee658-4b95-447e-9356-393ff12613b5-serving-cert\") pod \"authentication-operator-69f744f599-2xjnq\" (UID: \"459ee658-4b95-447e-9356-393ff12613b5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358510 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0546320f-3929-4452-a505-bdbb872741ad-image-import-ca\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358535 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0546320f-3929-4452-a505-bdbb872741ad-audit-dir\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358556 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0546320f-3929-4452-a505-bdbb872741ad-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " 
pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358575 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0546320f-3929-4452-a505-bdbb872741ad-serving-cert\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358598 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0dc8b28c-8f88-4497-a439-2f1500cda5c2-machine-approver-tls\") pod \"machine-approver-56656f9798-96gdl\" (UID: \"0dc8b28c-8f88-4497-a439-2f1500cda5c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358619 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0546320f-3929-4452-a505-bdbb872741ad-node-pullsecrets\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358643 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c88dca8-b239-4d98-b56a-3df7b296e4e7-audit-dir\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358668 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6c88dca8-b239-4d98-b56a-3df7b296e4e7-serving-cert\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358687 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dc8b28c-8f88-4497-a439-2f1500cda5c2-config\") pod \"machine-approver-56656f9798-96gdl\" (UID: \"0dc8b28c-8f88-4497-a439-2f1500cda5c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358707 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c88dca8-b239-4d98-b56a-3df7b296e4e7-audit-policies\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358736 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff0e32a4-5a0f-4779-a1db-b67aec04f414-client-ca\") pod \"route-controller-manager-6576b87f9c-vkklc\" (UID: \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.358756 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-svr2x\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:45 crc kubenswrapper[4717]: 
I0218 11:51:45.358776 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-serving-cert\") pod \"controller-manager-879f6c89f-svr2x\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.356760 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.361955 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.362059 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.362180 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.362415 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.362496 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.362689 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.362996 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.363108 4717 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"oauth-serving-cert" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.363194 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.363001 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.363664 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nj9hx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.365132 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.365340 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.363057 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.366222 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.384345 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.384945 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.385388 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.385425 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.385565 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.385779 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.385821 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.385917 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rzqs7"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.385939 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.386044 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.387049 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.387763 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.386044 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.388213 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.387960 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.390079 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.390101 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.392708 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.394540 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.407010 
4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cljzk"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.407559 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dnv67"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.407688 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.407765 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljzk" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.408090 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.408246 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.410185 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.410523 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.410535 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.411210 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.411304 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.412668 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4gqhf"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.413573 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4gqhf" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.413862 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.414378 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.416342 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.416853 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.417076 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.417290 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.421485 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.422179 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.425653 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sh4qc"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.426177 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-sh4qc" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.426492 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.430025 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-z28q6"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.430575 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.430872 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jmlmn"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.432084 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jmlmn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.432233 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z28q6" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.432431 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.433507 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qtw2w"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.434332 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.434764 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.439270 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mk6cn"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.440614 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.441914 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7stbl"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.447673 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sgvs4"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.448802 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.449426 4717 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.449846 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.451053 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-svr2x"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.452606 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.454187 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tph6g"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.455028 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.456222 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z4csx"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.459112 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-zss8p"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461138 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0dc8b28c-8f88-4497-a439-2f1500cda5c2-auth-proxy-config\") pod \"machine-approver-56656f9798-96gdl\" (UID: \"0dc8b28c-8f88-4497-a439-2f1500cda5c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461186 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq6jp\" (UniqueName: 
\"kubernetes.io/projected/cc067d60-aa98-4651-aad2-4d9dd7ee2683-kube-api-access-gq6jp\") pod \"cluster-samples-operator-665b6dd947-7stbl\" (UID: \"cc067d60-aa98-4651-aad2-4d9dd7ee2683\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7stbl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461218 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/459ee658-4b95-447e-9356-393ff12613b5-serving-cert\") pod \"authentication-operator-69f744f599-2xjnq\" (UID: \"459ee658-4b95-447e-9356-393ff12613b5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461242 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0546320f-3929-4452-a505-bdbb872741ad-image-import-ca\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461282 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f25x\" (UniqueName: \"kubernetes.io/projected/0e8734b5-4294-4091-b377-680aa4178a19-kube-api-access-9f25x\") pod \"downloads-7954f5f757-s9dzs\" (UID: \"0e8734b5-4294-4091-b377-680aa4178a19\") " pod="openshift-console/downloads-7954f5f757-s9dzs" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461310 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c042f74-11a5-46a9-bc05-6b3278428e36-console-oauth-config\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 
11:51:45.461334 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvjlv\" (UniqueName: \"kubernetes.io/projected/41932c5e-1add-49b1-8876-43ee2e9a4a91-kube-api-access-dvjlv\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461358 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5029ef8a-c6f0-43e4-b25c-d2f695020357-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qrvwl\" (UID: \"5029ef8a-c6f0-43e4-b25c-d2f695020357\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461392 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0546320f-3929-4452-a505-bdbb872741ad-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461569 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0546320f-3929-4452-a505-bdbb872741ad-audit-dir\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461615 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-oauth-serving-cert\") pod \"console-f9d7485db-mk6cn\" (UID: 
\"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461660 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0546320f-3929-4452-a505-bdbb872741ad-audit-dir\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461672 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk9qt\" (UniqueName: \"kubernetes.io/projected/41f921ae-dc74-4acf-a699-a5a5e574224d-kube-api-access-jk9qt\") pod \"openshift-controller-manager-operator-756b6f6bc6-h5czz\" (UID: \"41f921ae-dc74-4acf-a699-a5a5e574224d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461861 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0dc8b28c-8f88-4497-a439-2f1500cda5c2-machine-approver-tls\") pod \"machine-approver-56656f9798-96gdl\" (UID: \"0dc8b28c-8f88-4497-a439-2f1500cda5c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461890 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0546320f-3929-4452-a505-bdbb872741ad-serving-cert\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461912 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-console-config\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461933 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7h5xf\" (UID: \"9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461955 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/41932c5e-1add-49b1-8876-43ee2e9a4a91-etcd-ca\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.461980 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0546320f-3929-4452-a505-bdbb872741ad-node-pullsecrets\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.462001 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c88dca8-b239-4d98-b56a-3df7b296e4e7-audit-dir\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.462022 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7b5de2b3-a81b-458d-a061-e6f814de897b-metrics-tls\") pod \"ingress-operator-5b745b69d9-dj42d\" (UID: \"7b5de2b3-a81b-458d-a061-e6f814de897b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.462051 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e43c4105-3759-46ee-bde3-191e4ce3c318-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-svmld\" (UID: \"e43c4105-3759-46ee-bde3-191e4ce3c318\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.462074 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c88dca8-b239-4d98-b56a-3df7b296e4e7-serving-cert\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.462094 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmgtm\" (UniqueName: \"kubernetes.io/projected/9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b-kube-api-access-kmgtm\") pod \"openshift-apiserver-operator-796bbdcf4f-7h5xf\" (UID: \"9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.462117 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dc8b28c-8f88-4497-a439-2f1500cda5c2-config\") pod \"machine-approver-56656f9798-96gdl\" (UID: \"0dc8b28c-8f88-4497-a439-2f1500cda5c2\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.462140 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c88dca8-b239-4d98-b56a-3df7b296e4e7-audit-policies\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.462163 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47lzq\" (UniqueName: \"kubernetes.io/projected/384180d0-d0ee-41ed-bf82-b19b416e5972-kube-api-access-47lzq\") pod \"dns-operator-744455d44c-tph6g\" (UID: \"384180d0-d0ee-41ed-bf82-b19b416e5972\") " pod="openshift-dns-operator/dns-operator-744455d44c-tph6g" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.462179 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0546320f-3929-4452-a505-bdbb872741ad-node-pullsecrets\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.462231 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c88dca8-b239-4d98-b56a-3df7b296e4e7-audit-dir\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.462319 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.462385 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0546320f-3929-4452-a505-bdbb872741ad-image-import-ca\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.462861 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0546320f-3929-4452-a505-bdbb872741ad-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.462138 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0dc8b28c-8f88-4497-a439-2f1500cda5c2-auth-proxy-config\") pod \"machine-approver-56656f9798-96gdl\" (UID: \"0dc8b28c-8f88-4497-a439-2f1500cda5c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.462185 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b5de2b3-a81b-458d-a061-e6f814de897b-trusted-ca\") pod \"ingress-operator-5b745b69d9-dj42d\" (UID: \"7b5de2b3-a81b-458d-a061-e6f814de897b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466366 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff0e32a4-5a0f-4779-a1db-b67aec04f414-client-ca\") pod \"route-controller-manager-6576b87f9c-vkklc\" (UID: \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466413 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-svr2x\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466454 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f921ae-dc74-4acf-a699-a5a5e574224d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-h5czz\" (UID: \"41f921ae-dc74-4acf-a699-a5a5e574224d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466501 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-serving-cert\") pod \"controller-manager-879f6c89f-svr2x\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466532 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-trusted-ca-bundle\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466562 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c042f74-11a5-46a9-bc05-6b3278428e36-console-serving-cert\") pod \"console-f9d7485db-mk6cn\" (UID: 
\"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466591 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41932c5e-1add-49b1-8876-43ee2e9a4a91-etcd-client\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466628 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/459ee658-4b95-447e-9356-393ff12613b5-config\") pod \"authentication-operator-69f744f599-2xjnq\" (UID: \"459ee658-4b95-447e-9356-393ff12613b5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466658 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e7935a1e-ca2c-4dcb-87cf-c0269819a682-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xjlqq\" (UID: \"e7935a1e-ca2c-4dcb-87cf-c0269819a682\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466691 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-client-ca\") pod \"controller-manager-879f6c89f-svr2x\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466724 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/6c88dca8-b239-4d98-b56a-3df7b296e4e7-etcd-client\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466756 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s96p\" (UniqueName: \"kubernetes.io/projected/ff0e32a4-5a0f-4779-a1db-b67aec04f414-kube-api-access-8s96p\") pod \"route-controller-manager-6576b87f9c-vkklc\" (UID: \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466789 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41932c5e-1add-49b1-8876-43ee2e9a4a91-serving-cert\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466815 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcckb\" (UniqueName: \"kubernetes.io/projected/9c48e9b8-f48a-48d9-921a-8274c0cb430a-kube-api-access-rcckb\") pod \"kube-storage-version-migrator-operator-b67b599dd-br9vq\" (UID: \"9c48e9b8-f48a-48d9-921a-8274c0cb430a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466851 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ttck\" (UniqueName: \"kubernetes.io/projected/3c042f74-11a5-46a9-bc05-6b3278428e36-kube-api-access-8ttck\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " 
pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466884 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e7935a1e-ca2c-4dcb-87cf-c0269819a682-proxy-tls\") pod \"machine-config-controller-84d6567774-xjlqq\" (UID: \"e7935a1e-ca2c-4dcb-87cf-c0269819a682\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466917 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv7sq\" (UniqueName: \"kubernetes.io/projected/0546320f-3929-4452-a505-bdbb872741ad-kube-api-access-vv7sq\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466948 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/384180d0-d0ee-41ed-bf82-b19b416e5972-metrics-tls\") pod \"dns-operator-744455d44c-tph6g\" (UID: \"384180d0-d0ee-41ed-bf82-b19b416e5972\") " pod="openshift-dns-operator/dns-operator-744455d44c-tph6g" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.466975 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e43c4105-3759-46ee-bde3-191e4ce3c318-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-svmld\" (UID: \"e43c4105-3759-46ee-bde3-191e4ce3c318\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467006 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/41f921ae-dc74-4acf-a699-a5a5e574224d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-h5czz\" (UID: \"41f921ae-dc74-4acf-a699-a5a5e574224d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467041 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0546320f-3929-4452-a505-bdbb872741ad-encryption-config\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467072 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff0e32a4-5a0f-4779-a1db-b67aec04f414-serving-cert\") pod \"route-controller-manager-6576b87f9c-vkklc\" (UID: \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467096 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-config\") pod \"controller-manager-879f6c89f-svr2x\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467125 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2z7c\" (UniqueName: \"kubernetes.io/projected/7b5de2b3-a81b-458d-a061-e6f814de897b-kube-api-access-b2z7c\") pod \"ingress-operator-5b745b69d9-dj42d\" (UID: \"7b5de2b3-a81b-458d-a061-e6f814de897b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" Feb 18 11:51:45 crc 
kubenswrapper[4717]: I0218 11:51:45.467167 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dph8j\" (UniqueName: \"kubernetes.io/projected/e43c4105-3759-46ee-bde3-191e4ce3c318-kube-api-access-dph8j\") pod \"cluster-image-registry-operator-dc59b4c8b-svmld\" (UID: \"e43c4105-3759-46ee-bde3-191e4ce3c318\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467199 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/459ee658-4b95-447e-9356-393ff12613b5-service-ca-bundle\") pod \"authentication-operator-69f744f599-2xjnq\" (UID: \"459ee658-4b95-447e-9356-393ff12613b5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467235 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvhmp\" (UniqueName: \"kubernetes.io/projected/0dc8b28c-8f88-4497-a439-2f1500cda5c2-kube-api-access-qvhmp\") pod \"machine-approver-56656f9798-96gdl\" (UID: \"0dc8b28c-8f88-4497-a439-2f1500cda5c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467277 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/459ee658-4b95-447e-9356-393ff12613b5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-2xjnq\" (UID: \"459ee658-4b95-447e-9356-393ff12613b5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467304 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/0546320f-3929-4452-a505-bdbb872741ad-etcd-serving-ca\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467333 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5029ef8a-c6f0-43e4-b25c-d2f695020357-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qrvwl\" (UID: \"5029ef8a-c6f0-43e4-b25c-d2f695020357\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467381 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6c88dca8-b239-4d98-b56a-3df7b296e4e7-encryption-config\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467411 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b5de2b3-a81b-458d-a061-e6f814de897b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dj42d\" (UID: \"7b5de2b3-a81b-458d-a061-e6f814de897b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467449 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41932c5e-1add-49b1-8876-43ee2e9a4a91-config\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467484 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8m9r\" (UniqueName: \"kubernetes.io/projected/baa2972a-fc13-4b3b-bf4b-9dceaf35db41-kube-api-access-q8m9r\") pod \"machine-api-operator-5694c8668f-kzbdk\" (UID: \"baa2972a-fc13-4b3b-bf4b-9dceaf35db41\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467509 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvgk2\" (UniqueName: \"kubernetes.io/projected/6c88dca8-b239-4d98-b56a-3df7b296e4e7-kube-api-access-pvgk2\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467541 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e43c4105-3759-46ee-bde3-191e4ce3c318-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-svmld\" (UID: \"e43c4105-3759-46ee-bde3-191e4ce3c318\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467571 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-service-ca\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467628 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72bjr\" (UniqueName: \"kubernetes.io/projected/459ee658-4b95-447e-9356-393ff12613b5-kube-api-access-72bjr\") pod \"authentication-operator-69f744f599-2xjnq\" (UID: 
\"459ee658-4b95-447e-9356-393ff12613b5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467658 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6c88dca8-b239-4d98-b56a-3df7b296e4e7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467691 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0e32a4-5a0f-4779-a1db-b67aec04f414-config\") pod \"route-controller-manager-6576b87f9c-vkklc\" (UID: \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467720 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn75r\" (UniqueName: \"kubernetes.io/projected/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-kube-api-access-kn75r\") pod \"controller-manager-879f6c89f-svr2x\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467788 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5029ef8a-c6f0-43e4-b25c-d2f695020357-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qrvwl\" (UID: \"5029ef8a-c6f0-43e4-b25c-d2f695020357\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467795 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/0546320f-3929-4452-a505-bdbb872741ad-serving-cert\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467821 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/41932c5e-1add-49b1-8876-43ee2e9a4a91-etcd-service-ca\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467851 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa2972a-fc13-4b3b-bf4b-9dceaf35db41-config\") pod \"machine-api-operator-5694c8668f-kzbdk\" (UID: \"baa2972a-fc13-4b3b-bf4b-9dceaf35db41\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467870 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7h5xf\" (UID: \"9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467909 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0546320f-3929-4452-a505-bdbb872741ad-etcd-client\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467932 4717 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k4xq\" (UniqueName: \"kubernetes.io/projected/e7935a1e-ca2c-4dcb-87cf-c0269819a682-kube-api-access-8k4xq\") pod \"machine-config-controller-84d6567774-xjlqq\" (UID: \"e7935a1e-ca2c-4dcb-87cf-c0269819a682\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467955 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c88dca8-b239-4d98-b56a-3df7b296e4e7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467977 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c48e9b8-f48a-48d9-921a-8274c0cb430a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-br9vq\" (UID: \"9c48e9b8-f48a-48d9-921a-8274c0cb430a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.467998 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c48e9b8-f48a-48d9-921a-8274c0cb430a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-br9vq\" (UID: \"9c48e9b8-f48a-48d9-921a-8274c0cb430a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.468019 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/baa2972a-fc13-4b3b-bf4b-9dceaf35db41-images\") pod \"machine-api-operator-5694c8668f-kzbdk\" (UID: \"baa2972a-fc13-4b3b-bf4b-9dceaf35db41\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.468042 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0546320f-3929-4452-a505-bdbb872741ad-config\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.468062 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/baa2972a-fc13-4b3b-bf4b-9dceaf35db41-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kzbdk\" (UID: \"baa2972a-fc13-4b3b-bf4b-9dceaf35db41\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.468081 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0546320f-3929-4452-a505-bdbb872741ad-audit\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.468100 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cc067d60-aa98-4651-aad2-4d9dd7ee2683-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7stbl\" (UID: \"cc067d60-aa98-4651-aad2-4d9dd7ee2683\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7stbl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.469293 4717 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0dc8b28c-8f88-4497-a439-2f1500cda5c2-machine-approver-tls\") pod \"machine-approver-56656f9798-96gdl\" (UID: \"0dc8b28c-8f88-4497-a439-2f1500cda5c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.469364 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff0e32a4-5a0f-4779-a1db-b67aec04f414-client-ca\") pod \"route-controller-manager-6576b87f9c-vkklc\" (UID: \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.469893 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0546320f-3929-4452-a505-bdbb872741ad-etcd-serving-ca\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.470354 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/459ee658-4b95-447e-9356-393ff12613b5-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-2xjnq\" (UID: \"459ee658-4b95-447e-9356-393ff12613b5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.470708 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa2972a-fc13-4b3b-bf4b-9dceaf35db41-config\") pod \"machine-api-operator-5694c8668f-kzbdk\" (UID: \"baa2972a-fc13-4b3b-bf4b-9dceaf35db41\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" Feb 18 11:51:45 crc 
kubenswrapper[4717]: I0218 11:51:45.471283 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c88dca8-b239-4d98-b56a-3df7b296e4e7-serving-cert\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.471328 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-svr2x\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.471575 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c88dca8-b239-4d98-b56a-3df7b296e4e7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.471681 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dc8b28c-8f88-4497-a439-2f1500cda5c2-config\") pod \"machine-approver-56656f9798-96gdl\" (UID: \"0dc8b28c-8f88-4497-a439-2f1500cda5c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.472743 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6c88dca8-b239-4d98-b56a-3df7b296e4e7-encryption-config\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc 
kubenswrapper[4717]: I0218 11:51:45.473896 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0546320f-3929-4452-a505-bdbb872741ad-etcd-client\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.474504 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-config\") pod \"controller-manager-879f6c89f-svr2x\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.474921 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-serving-cert\") pod \"controller-manager-879f6c89f-svr2x\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.475572 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6c88dca8-b239-4d98-b56a-3df7b296e4e7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.476094 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/459ee658-4b95-447e-9356-393ff12613b5-service-ca-bundle\") pod \"authentication-operator-69f744f599-2xjnq\" (UID: \"459ee658-4b95-447e-9356-393ff12613b5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 
11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.476808 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0546320f-3929-4452-a505-bdbb872741ad-config\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.477584 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-client-ca\") pod \"controller-manager-879f6c89f-svr2x\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.477606 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/baa2972a-fc13-4b3b-bf4b-9dceaf35db41-images\") pod \"machine-api-operator-5694c8668f-kzbdk\" (UID: \"baa2972a-fc13-4b3b-bf4b-9dceaf35db41\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.478108 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.478146 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rzqs7"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.478163 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-k2lck"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.478187 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0546320f-3929-4452-a505-bdbb872741ad-audit\") pod 
\"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.478374 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/459ee658-4b95-447e-9356-393ff12613b5-config\") pod \"authentication-operator-69f744f599-2xjnq\" (UID: \"459ee658-4b95-447e-9356-393ff12613b5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.478519 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-zss8p" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.478788 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c88dca8-b239-4d98-b56a-3df7b296e4e7-audit-policies\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.479033 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0e32a4-5a0f-4779-a1db-b67aec04f414-config\") pod \"route-controller-manager-6576b87f9c-vkklc\" (UID: \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.479817 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0546320f-3929-4452-a505-bdbb872741ad-encryption-config\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.480432 
4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/459ee658-4b95-447e-9356-393ff12613b5-serving-cert\") pod \"authentication-operator-69f744f599-2xjnq\" (UID: \"459ee658-4b95-447e-9356-393ff12613b5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.480609 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-k2lck" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.481384 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff0e32a4-5a0f-4779-a1db-b67aec04f414-serving-cert\") pod \"route-controller-manager-6576b87f9c-vkklc\" (UID: \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.481439 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.481857 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.481993 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c88dca8-b239-4d98-b56a-3df7b296e4e7-etcd-client\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.483075 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-s9dzs"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.485114 4717 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.486779 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.487339 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/baa2972a-fc13-4b3b-bf4b-9dceaf35db41-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kzbdk\" (UID: \"baa2972a-fc13-4b3b-bf4b-9dceaf35db41\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.489272 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.490975 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cljzk"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.492294 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.493230 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-2xjnq"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.494342 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4gqhf"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.495282 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 
11:51:45.496277 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nj9hx"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.497561 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.498944 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.500046 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bxl4l"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.501371 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.501814 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.502389 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sh4qc"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.504629 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qtw2w"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.506249 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.507652 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-zlwvm"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.508814 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-zlwvm" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.510334 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-z28q6"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.511582 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.513217 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-zss8p"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.514987 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-s5p7r"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.516391 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.516575 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jmlmn"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.518927 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zlwvm"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.520289 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-s5p7r"] Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.521505 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.541838 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.561833 4717 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.568847 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/41932c5e-1add-49b1-8876-43ee2e9a4a91-etcd-ca\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.568886 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7b5de2b3-a81b-458d-a061-e6f814de897b-metrics-tls\") pod \"ingress-operator-5b745b69d9-dj42d\" (UID: \"7b5de2b3-a81b-458d-a061-e6f814de897b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.568910 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e43c4105-3759-46ee-bde3-191e4ce3c318-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-svmld\" (UID: \"e43c4105-3759-46ee-bde3-191e4ce3c318\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.568935 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmgtm\" (UniqueName: \"kubernetes.io/projected/9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b-kube-api-access-kmgtm\") pod \"openshift-apiserver-operator-796bbdcf4f-7h5xf\" (UID: \"9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.568958 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47lzq\" (UniqueName: 
\"kubernetes.io/projected/384180d0-d0ee-41ed-bf82-b19b416e5972-kube-api-access-47lzq\") pod \"dns-operator-744455d44c-tph6g\" (UID: \"384180d0-d0ee-41ed-bf82-b19b416e5972\") " pod="openshift-dns-operator/dns-operator-744455d44c-tph6g" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.568979 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b5de2b3-a81b-458d-a061-e6f814de897b-trusted-ca\") pod \"ingress-operator-5b745b69d9-dj42d\" (UID: \"7b5de2b3-a81b-458d-a061-e6f814de897b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569040 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f921ae-dc74-4acf-a699-a5a5e574224d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-h5czz\" (UID: \"41f921ae-dc74-4acf-a699-a5a5e574224d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569077 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-trusted-ca-bundle\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569101 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c042f74-11a5-46a9-bc05-6b3278428e36-console-serving-cert\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569121 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41932c5e-1add-49b1-8876-43ee2e9a4a91-etcd-client\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569144 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e7935a1e-ca2c-4dcb-87cf-c0269819a682-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xjlqq\" (UID: \"e7935a1e-ca2c-4dcb-87cf-c0269819a682\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569168 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41932c5e-1add-49b1-8876-43ee2e9a4a91-serving-cert\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569189 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcckb\" (UniqueName: \"kubernetes.io/projected/9c48e9b8-f48a-48d9-921a-8274c0cb430a-kube-api-access-rcckb\") pod \"kube-storage-version-migrator-operator-b67b599dd-br9vq\" (UID: \"9c48e9b8-f48a-48d9-921a-8274c0cb430a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569222 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ttck\" (UniqueName: \"kubernetes.io/projected/3c042f74-11a5-46a9-bc05-6b3278428e36-kube-api-access-8ttck\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") 
" pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569243 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e7935a1e-ca2c-4dcb-87cf-c0269819a682-proxy-tls\") pod \"machine-config-controller-84d6567774-xjlqq\" (UID: \"e7935a1e-ca2c-4dcb-87cf-c0269819a682\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569291 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/384180d0-d0ee-41ed-bf82-b19b416e5972-metrics-tls\") pod \"dns-operator-744455d44c-tph6g\" (UID: \"384180d0-d0ee-41ed-bf82-b19b416e5972\") " pod="openshift-dns-operator/dns-operator-744455d44c-tph6g" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569313 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e43c4105-3759-46ee-bde3-191e4ce3c318-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-svmld\" (UID: \"e43c4105-3759-46ee-bde3-191e4ce3c318\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569334 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f921ae-dc74-4acf-a699-a5a5e574224d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-h5czz\" (UID: \"41f921ae-dc74-4acf-a699-a5a5e574224d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569360 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2z7c\" (UniqueName: 
\"kubernetes.io/projected/7b5de2b3-a81b-458d-a061-e6f814de897b-kube-api-access-b2z7c\") pod \"ingress-operator-5b745b69d9-dj42d\" (UID: \"7b5de2b3-a81b-458d-a061-e6f814de897b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569382 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dph8j\" (UniqueName: \"kubernetes.io/projected/e43c4105-3759-46ee-bde3-191e4ce3c318-kube-api-access-dph8j\") pod \"cluster-image-registry-operator-dc59b4c8b-svmld\" (UID: \"e43c4105-3759-46ee-bde3-191e4ce3c318\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569442 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5029ef8a-c6f0-43e4-b25c-d2f695020357-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qrvwl\" (UID: \"5029ef8a-c6f0-43e4-b25c-d2f695020357\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569473 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b5de2b3-a81b-458d-a061-e6f814de897b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dj42d\" (UID: \"7b5de2b3-a81b-458d-a061-e6f814de897b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569511 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41932c5e-1add-49b1-8876-43ee2e9a4a91-config\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 
11:51:45.569533 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e43c4105-3759-46ee-bde3-191e4ce3c318-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-svmld\" (UID: \"e43c4105-3759-46ee-bde3-191e4ce3c318\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569561 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-service-ca\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569601 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5029ef8a-c6f0-43e4-b25c-d2f695020357-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qrvwl\" (UID: \"5029ef8a-c6f0-43e4-b25c-d2f695020357\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569624 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/41932c5e-1add-49b1-8876-43ee2e9a4a91-etcd-service-ca\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569648 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7h5xf\" (UID: \"9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569693 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k4xq\" (UniqueName: \"kubernetes.io/projected/e7935a1e-ca2c-4dcb-87cf-c0269819a682-kube-api-access-8k4xq\") pod \"machine-config-controller-84d6567774-xjlqq\" (UID: \"e7935a1e-ca2c-4dcb-87cf-c0269819a682\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569721 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c48e9b8-f48a-48d9-921a-8274c0cb430a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-br9vq\" (UID: \"9c48e9b8-f48a-48d9-921a-8274c0cb430a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569744 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c48e9b8-f48a-48d9-921a-8274c0cb430a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-br9vq\" (UID: \"9c48e9b8-f48a-48d9-921a-8274c0cb430a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569768 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cc067d60-aa98-4651-aad2-4d9dd7ee2683-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7stbl\" (UID: \"cc067d60-aa98-4651-aad2-4d9dd7ee2683\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7stbl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569790 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq6jp\" (UniqueName: \"kubernetes.io/projected/cc067d60-aa98-4651-aad2-4d9dd7ee2683-kube-api-access-gq6jp\") pod \"cluster-samples-operator-665b6dd947-7stbl\" (UID: \"cc067d60-aa98-4651-aad2-4d9dd7ee2683\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7stbl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569815 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c042f74-11a5-46a9-bc05-6b3278428e36-console-oauth-config\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569836 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvjlv\" (UniqueName: \"kubernetes.io/projected/41932c5e-1add-49b1-8876-43ee2e9a4a91-kube-api-access-dvjlv\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569854 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f921ae-dc74-4acf-a699-a5a5e574224d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-h5czz\" (UID: \"41f921ae-dc74-4acf-a699-a5a5e574224d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569858 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5029ef8a-c6f0-43e4-b25c-d2f695020357-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qrvwl\" (UID: 
\"5029ef8a-c6f0-43e4-b25c-d2f695020357\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569904 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f25x\" (UniqueName: \"kubernetes.io/projected/0e8734b5-4294-4091-b377-680aa4178a19-kube-api-access-9f25x\") pod \"downloads-7954f5f757-s9dzs\" (UID: \"0e8734b5-4294-4091-b377-680aa4178a19\") " pod="openshift-console/downloads-7954f5f757-s9dzs" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569931 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-oauth-serving-cert\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569956 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk9qt\" (UniqueName: \"kubernetes.io/projected/41f921ae-dc74-4acf-a699-a5a5e574224d-kube-api-access-jk9qt\") pod \"openshift-controller-manager-operator-756b6f6bc6-h5czz\" (UID: \"41f921ae-dc74-4acf-a699-a5a5e574224d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569979 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-console-config\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.569999 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7h5xf\" (UID: \"9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.570104 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-trusted-ca-bundle\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.570205 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e43c4105-3759-46ee-bde3-191e4ce3c318-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-svmld\" (UID: \"e43c4105-3759-46ee-bde3-191e4ce3c318\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.570585 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7h5xf\" (UID: \"9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.571397 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-console-config\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.571630 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/41932c5e-1add-49b1-8876-43ee2e9a4a91-etcd-ca\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.571739 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-oauth-serving-cert\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.572188 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e7935a1e-ca2c-4dcb-87cf-c0269819a682-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xjlqq\" (UID: \"e7935a1e-ca2c-4dcb-87cf-c0269819a682\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.572669 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5029ef8a-c6f0-43e4-b25c-d2f695020357-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qrvwl\" (UID: \"5029ef8a-c6f0-43e4-b25c-d2f695020357\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.573079 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5029ef8a-c6f0-43e4-b25c-d2f695020357-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qrvwl\" (UID: \"5029ef8a-c6f0-43e4-b25c-d2f695020357\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl" Feb 18 11:51:45 crc 
kubenswrapper[4717]: I0218 11:51:45.573118 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41932c5e-1add-49b1-8876-43ee2e9a4a91-config\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.573300 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-service-ca\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.573833 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c042f74-11a5-46a9-bc05-6b3278428e36-console-serving-cert\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.573921 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/41932c5e-1add-49b1-8876-43ee2e9a4a91-etcd-service-ca\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.573988 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41932c5e-1add-49b1-8876-43ee2e9a4a91-etcd-client\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.575904 4717 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7h5xf\" (UID: \"9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.576294 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c042f74-11a5-46a9-bc05-6b3278428e36-console-oauth-config\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.576801 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41932c5e-1add-49b1-8876-43ee2e9a4a91-serving-cert\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.577813 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cc067d60-aa98-4651-aad2-4d9dd7ee2683-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7stbl\" (UID: \"cc067d60-aa98-4651-aad2-4d9dd7ee2683\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7stbl" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.577811 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f921ae-dc74-4acf-a699-a5a5e574224d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-h5czz\" (UID: \"41f921ae-dc74-4acf-a699-a5a5e574224d\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.578516 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/384180d0-d0ee-41ed-bf82-b19b416e5972-metrics-tls\") pod \"dns-operator-744455d44c-tph6g\" (UID: \"384180d0-d0ee-41ed-bf82-b19b416e5972\") " pod="openshift-dns-operator/dns-operator-744455d44c-tph6g" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.578646 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e43c4105-3759-46ee-bde3-191e4ce3c318-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-svmld\" (UID: \"e43c4105-3759-46ee-bde3-191e4ce3c318\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.581691 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.608998 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.610361 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b5de2b3-a81b-458d-a061-e6f814de897b-trusted-ca\") pod \"ingress-operator-5b745b69d9-dj42d\" (UID: \"7b5de2b3-a81b-458d-a061-e6f814de897b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.621923 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.642092 4717 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.662056 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.681893 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.701952 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.721900 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.742105 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.746542 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e7935a1e-ca2c-4dcb-87cf-c0269819a682-proxy-tls\") pod \"machine-config-controller-84d6567774-xjlqq\" (UID: \"e7935a1e-ca2c-4dcb-87cf-c0269819a682\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.763559 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.782364 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 11:51:45 crc kubenswrapper[4717]: 
I0218 11:51:45.802221 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.812152 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7b5de2b3-a81b-458d-a061-e6f814de897b-metrics-tls\") pod \"ingress-operator-5b745b69d9-dj42d\" (UID: \"7b5de2b3-a81b-458d-a061-e6f814de897b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.821746 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.847896 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.861431 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.882419 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.907023 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.922149 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.942709 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.961939 4717 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 11:51:45 crc kubenswrapper[4717]: I0218 11:51:45.981959 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.002762 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.021665 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.042393 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.062050 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.093697 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.142453 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.162422 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.182869 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.195700 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/9c48e9b8-f48a-48d9-921a-8274c0cb430a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-br9vq\" (UID: \"9c48e9b8-f48a-48d9-921a-8274c0cb430a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.202443 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.203797 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c48e9b8-f48a-48d9-921a-8274c0cb430a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-br9vq\" (UID: \"9c48e9b8-f48a-48d9-921a-8274c0cb430a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.221824 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.242575 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.262382 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.282005 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.302701 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 11:51:46 crc kubenswrapper[4717]: 
I0218 11:51:46.321915 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.342586 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.361921 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.383094 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.402079 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.420213 4717 request.go:700] Waited for 1.011641102s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.421866 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.441952 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.461408 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.482894 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 11:51:46 crc 
kubenswrapper[4717]: I0218 11:51:46.502444 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.522676 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.542129 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.561741 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.582379 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.602282 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.622498 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.642508 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.661991 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.682640 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.702576 4717 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.722171 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.742012 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.761501 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.781819 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.802437 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.823171 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.842467 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.862778 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.882606 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.903077 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 
11:51:46.921611 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.942722 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.962279 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 11:51:46 crc kubenswrapper[4717]: I0218 11:51:46.982494 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.008184 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.022430 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.041871 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.061828 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.096225 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv7sq\" (UniqueName: \"kubernetes.io/projected/0546320f-3929-4452-a505-bdbb872741ad-kube-api-access-vv7sq\") pod \"apiserver-76f77b778f-z4csx\" (UID: \"0546320f-3929-4452-a505-bdbb872741ad\") " pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.098154 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.117477 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8m9r\" (UniqueName: \"kubernetes.io/projected/baa2972a-fc13-4b3b-bf4b-9dceaf35db41-kube-api-access-q8m9r\") pod \"machine-api-operator-5694c8668f-kzbdk\" (UID: \"baa2972a-fc13-4b3b-bf4b-9dceaf35db41\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.136214 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s96p\" (UniqueName: \"kubernetes.io/projected/ff0e32a4-5a0f-4779-a1db-b67aec04f414-kube-api-access-8s96p\") pod \"route-controller-manager-6576b87f9c-vkklc\" (UID: \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.149351 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.157846 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72bjr\" (UniqueName: \"kubernetes.io/projected/459ee658-4b95-447e-9356-393ff12613b5-kube-api-access-72bjr\") pod \"authentication-operator-69f744f599-2xjnq\" (UID: \"459ee658-4b95-447e-9356-393ff12613b5\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.176536 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvhmp\" (UniqueName: \"kubernetes.io/projected/0dc8b28c-8f88-4497-a439-2f1500cda5c2-kube-api-access-qvhmp\") pod \"machine-approver-56656f9798-96gdl\" (UID: \"0dc8b28c-8f88-4497-a439-2f1500cda5c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.184593 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.199485 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn75r\" (UniqueName: \"kubernetes.io/projected/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-kube-api-access-kn75r\") pod \"controller-manager-879f6c89f-svr2x\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.209001 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.217441 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvgk2\" (UniqueName: \"kubernetes.io/projected/6c88dca8-b239-4d98-b56a-3df7b296e4e7-kube-api-access-pvgk2\") pod \"apiserver-7bbb656c7d-6kmwp\" (UID: \"6c88dca8-b239-4d98-b56a-3df7b296e4e7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.222973 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.235746 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.244213 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.254765 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.263546 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.281762 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.284030 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z4csx"] Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.302289 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 11:51:47 crc kubenswrapper[4717]: W0218 11:51:47.310840 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0546320f_3929_4452_a505_bdbb872741ad.slice/crio-0013e28cde2add1fd31a61c9a40fd0e0179c8b689673ef7d735dc3da4254da4b WatchSource:0}: Error finding container 0013e28cde2add1fd31a61c9a40fd0e0179c8b689673ef7d735dc3da4254da4b: Status 404 returned error can't find the container with id 0013e28cde2add1fd31a61c9a40fd0e0179c8b689673ef7d735dc3da4254da4b Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.322543 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.343642 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.348086 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kzbdk"] Feb 18 11:51:47 crc kubenswrapper[4717]: 
I0218 11:51:47.362418 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.383962 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.397279 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc"] Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.402524 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.421437 4717 request.go:700] Waited for 1.904817745s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&limit=500&resourceVersion=0 Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.423517 4717 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.435681 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.444127 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.462390 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.472136 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-2xjnq"] Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.473370 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-svr2x"] Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.507787 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47lzq\" (UniqueName: \"kubernetes.io/projected/384180d0-d0ee-41ed-bf82-b19b416e5972-kube-api-access-47lzq\") pod \"dns-operator-744455d44c-tph6g\" (UID: \"384180d0-d0ee-41ed-bf82-b19b416e5972\") " pod="openshift-dns-operator/dns-operator-744455d44c-tph6g" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.524214 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmgtm\" (UniqueName: \"kubernetes.io/projected/9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b-kube-api-access-kmgtm\") pod \"openshift-apiserver-operator-796bbdcf4f-7h5xf\" (UID: \"9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf" Feb 18 11:51:47 crc kubenswrapper[4717]: W0218 11:51:47.531498 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23a7650f_4199_4243_8ca1_07a4d4a8c8b4.slice/crio-2bea45deb9a5e375f5efbaee3241321a9db9739a9d0a8dae94ffc620c8ca520f 
WatchSource:0}: Error finding container 2bea45deb9a5e375f5efbaee3241321a9db9739a9d0a8dae94ffc620c8ca520f: Status 404 returned error can't find the container with id 2bea45deb9a5e375f5efbaee3241321a9db9739a9d0a8dae94ffc620c8ca520f Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.538022 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk9qt\" (UniqueName: \"kubernetes.io/projected/41f921ae-dc74-4acf-a699-a5a5e574224d-kube-api-access-jk9qt\") pod \"openshift-controller-manager-operator-756b6f6bc6-h5czz\" (UID: \"41f921ae-dc74-4acf-a699-a5a5e574224d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.560896 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f25x\" (UniqueName: \"kubernetes.io/projected/0e8734b5-4294-4091-b377-680aa4178a19-kube-api-access-9f25x\") pod \"downloads-7954f5f757-s9dzs\" (UID: \"0e8734b5-4294-4091-b377-680aa4178a19\") " pod="openshift-console/downloads-7954f5f757-s9dzs" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.579173 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e43c4105-3759-46ee-bde3-191e4ce3c318-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-svmld\" (UID: \"e43c4105-3759-46ee-bde3-191e4ce3c318\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.598221 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dph8j\" (UniqueName: \"kubernetes.io/projected/e43c4105-3759-46ee-bde3-191e4ce3c318-kube-api-access-dph8j\") pod \"cluster-image-registry-operator-dc59b4c8b-svmld\" (UID: \"e43c4105-3759-46ee-bde3-191e4ce3c318\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" Feb 18 
11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.617400 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2z7c\" (UniqueName: \"kubernetes.io/projected/7b5de2b3-a81b-458d-a061-e6f814de897b-kube-api-access-b2z7c\") pod \"ingress-operator-5b745b69d9-dj42d\" (UID: \"7b5de2b3-a81b-458d-a061-e6f814de897b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.620057 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.638030 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.639941 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ttck\" (UniqueName: \"kubernetes.io/projected/3c042f74-11a5-46a9-bc05-6b3278428e36-kube-api-access-8ttck\") pod \"console-f9d7485db-mk6cn\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.644322 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-tph6g" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.651973 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.652001 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp"] Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.665694 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5029ef8a-c6f0-43e4-b25c-d2f695020357-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qrvwl\" (UID: \"5029ef8a-c6f0-43e4-b25c-d2f695020357\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.668447 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-s9dzs" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.678239 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k4xq\" (UniqueName: \"kubernetes.io/projected/e7935a1e-ca2c-4dcb-87cf-c0269819a682-kube-api-access-8k4xq\") pod \"machine-config-controller-84d6567774-xjlqq\" (UID: \"e7935a1e-ca2c-4dcb-87cf-c0269819a682\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.700156 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b5de2b3-a81b-458d-a061-e6f814de897b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dj42d\" (UID: \"7b5de2b3-a81b-458d-a061-e6f814de897b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.706412 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.717581 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq6jp\" (UniqueName: \"kubernetes.io/projected/cc067d60-aa98-4651-aad2-4d9dd7ee2683-kube-api-access-gq6jp\") pod \"cluster-samples-operator-665b6dd947-7stbl\" (UID: \"cc067d60-aa98-4651-aad2-4d9dd7ee2683\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7stbl" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.740407 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcckb\" (UniqueName: \"kubernetes.io/projected/9c48e9b8-f48a-48d9-921a-8274c0cb430a-kube-api-access-rcckb\") pod \"kube-storage-version-migrator-operator-b67b599dd-br9vq\" (UID: \"9c48e9b8-f48a-48d9-921a-8274c0cb430a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.745311 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.760698 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvjlv\" (UniqueName: \"kubernetes.io/projected/41932c5e-1add-49b1-8876-43ee2e9a4a91-kube-api-access-dvjlv\") pod \"etcd-operator-b45778765-sgvs4\" (UID: \"41932c5e-1add-49b1-8876-43ee2e9a4a91\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.762556 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.770328 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.806350 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9b3201b-b5e0-4a95-9015-97309eb9957e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.806429 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.806460 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-registry-tls\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.806478 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9b3201b-b5e0-4a95-9015-97309eb9957e-registry-certificates\") pod 
\"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.806499 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.806547 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f172f40-f7d9-49a1-acf0-b2596b2c3bde-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bs5zz\" (UID: \"1f172f40-f7d9-49a1-acf0-b2596b2c3bde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.806566 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-audit-policies\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.806584 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 
11:51:47.806604 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f172f40-f7d9-49a1-acf0-b2596b2c3bde-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bs5zz\" (UID: \"1f172f40-f7d9-49a1-acf0-b2596b2c3bde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.806623 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ef7ad6b-12b9-46b2-96e4-f4634170ab20-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7fgq4\" (UID: \"6ef7ad6b-12b9-46b2-96e4-f4634170ab20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.806693 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ef7ad6b-12b9-46b2-96e4-f4634170ab20-config\") pod \"kube-controller-manager-operator-78b949d7b-7fgq4\" (UID: \"6ef7ad6b-12b9-46b2-96e4-f4634170ab20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.806727 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: E0218 11:51:47.808555 4717 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:48.308533223 +0000 UTC m=+142.710634539 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.808665 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj7dr\" (UniqueName: \"kubernetes.io/projected/518edf1a-e4f5-450a-90ff-151dc3106649-kube-api-access-nj7dr\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.809067 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/518edf1a-e4f5-450a-90ff-151dc3106649-audit-dir\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.809139 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.809180 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.809379 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-bound-sa-token\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.809477 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f172f40-f7d9-49a1-acf0-b2596b2c3bde-config\") pod \"kube-apiserver-operator-766d6c64bb-bs5zz\" (UID: \"1f172f40-f7d9-49a1-acf0-b2596b2c3bde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.809549 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.809582 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.810060 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9b3201b-b5e0-4a95-9015-97309eb9957e-trusted-ca\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.810125 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ef7ad6b-12b9-46b2-96e4-f4634170ab20-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7fgq4\" (UID: \"6ef7ad6b-12b9-46b2-96e4-f4634170ab20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.810161 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5n5g\" (UniqueName: \"kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-kube-api-access-x5n5g\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.810189 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9b3201b-b5e0-4a95-9015-97309eb9957e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bxl4l\" 
(UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.810217 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.810307 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.810335 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.810488 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 
18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.824820 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" event={"ID":"0dc8b28c-8f88-4497-a439-2f1500cda5c2","Type":"ContainerStarted","Data":"16187053b7dc2717f6d072961a767813acf3c7db4b58f87f3d571c3127be6f7b"} Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.824898 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" event={"ID":"0dc8b28c-8f88-4497-a439-2f1500cda5c2","Type":"ContainerStarted","Data":"cb47eb73f799c66d1d759eb26bcd6e99255f8e1e1aa7abf1573294622bc2ca2e"} Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.839597 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" event={"ID":"baa2972a-fc13-4b3b-bf4b-9dceaf35db41","Type":"ContainerStarted","Data":"edcd08c5833a5339ca83696f66e1229d4cbf3f3f3a5a96ad3ba63f412422b905"} Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.839994 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" event={"ID":"baa2972a-fc13-4b3b-bf4b-9dceaf35db41","Type":"ContainerStarted","Data":"2138baa05400d52b843e85a4cf3f6b672dd7a300d1841c587c005f37118e3061"} Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.840009 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" event={"ID":"baa2972a-fc13-4b3b-bf4b-9dceaf35db41","Type":"ContainerStarted","Data":"d3e90e52780d75b0ce0c62180a5b939c28e28a2675d66a2ab2bf061146f8e1c0"} Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.844646 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" 
event={"ID":"459ee658-4b95-447e-9356-393ff12613b5","Type":"ContainerStarted","Data":"3a56fe0302f15c5fb2642d661949e6f1b1f22698f91c30b5e3688e7b346da829"} Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.844691 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" event={"ID":"459ee658-4b95-447e-9356-393ff12613b5","Type":"ContainerStarted","Data":"b73fbd20c5821c1d8927aaf1ffc5c8a4dc490fc8037f4dd273a6e9972f66c4d9"} Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.857569 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" event={"ID":"23a7650f-4199-4243-8ca1-07a4d4a8c8b4","Type":"ContainerStarted","Data":"9e83bdb0b2a3f3a22caf14272bd13500331da514c4bfd44cb648717114fd1333"} Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.857609 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" event={"ID":"23a7650f-4199-4243-8ca1-07a4d4a8c8b4","Type":"ContainerStarted","Data":"2bea45deb9a5e375f5efbaee3241321a9db9739a9d0a8dae94ffc620c8ca520f"} Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.857874 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.859471 4717 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-svr2x container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.859523 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" podUID="23a7650f-4199-4243-8ca1-07a4d4a8c8b4" 
containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.863579 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" event={"ID":"ff0e32a4-5a0f-4779-a1db-b67aec04f414","Type":"ContainerStarted","Data":"bb36b837b87d3615978bc8e3d30c73c8ce6dd6a883ee8c76d1c50fa5267bfc7c"} Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.863632 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" event={"ID":"ff0e32a4-5a0f-4779-a1db-b67aec04f414","Type":"ContainerStarted","Data":"194c1f5c409f41486ebe5b73b1f3282ce948bc7a22e5019817ffd8cb6c2e32a3"} Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.864486 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.866356 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" event={"ID":"6c88dca8-b239-4d98-b56a-3df7b296e4e7","Type":"ContainerStarted","Data":"cd229224d94301d0d112fb5591d9e4a4a48b08b8d1f9f879f1097a5ea87913fa"} Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.869097 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7stbl" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.874896 4717 generic.go:334] "Generic (PLEG): container finished" podID="0546320f-3929-4452-a505-bdbb872741ad" containerID="030c505ed81dbd04c4ea6f1c76142734981b4f750827aa8966817ee319b67ad9" exitCode=0 Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.874961 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z4csx" event={"ID":"0546320f-3929-4452-a505-bdbb872741ad","Type":"ContainerDied","Data":"030c505ed81dbd04c4ea6f1c76142734981b4f750827aa8966817ee319b67ad9"} Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.874995 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z4csx" event={"ID":"0546320f-3929-4452-a505-bdbb872741ad","Type":"ContainerStarted","Data":"0013e28cde2add1fd31a61c9a40fd0e0179c8b689673ef7d735dc3da4254da4b"} Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.877298 4717 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vkklc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.877358 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" podUID="ff0e32a4-5a0f-4779-a1db-b67aec04f414" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.914943 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915093 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f172f40-f7d9-49a1-acf0-b2596b2c3bde-config\") pod \"kube-apiserver-operator-766d6c64bb-bs5zz\" (UID: \"1f172f40-f7d9-49a1-acf0-b2596b2c3bde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915121 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915150 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5n5g\" (UniqueName: \"kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-kube-api-access-x5n5g\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915171 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: 
I0218 11:51:47.915193 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915230 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f579c52-f579-4acb-824a-46d88361ce98-config\") pod \"console-operator-58897d9998-nj9hx\" (UID: \"2f579c52-f579-4acb-824a-46d88361ce98\") " pod="openshift-console-operator/console-operator-58897d9998-nj9hx" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915274 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f579c52-f579-4acb-824a-46d88361ce98-serving-cert\") pod \"console-operator-58897d9998-nj9hx\" (UID: \"2f579c52-f579-4acb-824a-46d88361ce98\") " pod="openshift-console-operator/console-operator-58897d9998-nj9hx" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915300 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915332 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915352 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-audit-policies\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915372 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f172f40-f7d9-49a1-acf0-b2596b2c3bde-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bs5zz\" (UID: \"1f172f40-f7d9-49a1-acf0-b2596b2c3bde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915400 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915421 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj7dr\" (UniqueName: \"kubernetes.io/projected/518edf1a-e4f5-450a-90ff-151dc3106649-kube-api-access-nj7dr\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915443 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-bound-sa-token\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915465 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm4kb\" (UniqueName: \"kubernetes.io/projected/0f2b6b73-0ea9-4de9-9bc4-e76322e04a91-kube-api-access-lm4kb\") pod \"openshift-config-operator-7777fb866f-x6rkx\" (UID: \"0f2b6b73-0ea9-4de9-9bc4-e76322e04a91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915500 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915523 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9b3201b-b5e0-4a95-9015-97309eb9957e-trusted-ca\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915544 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2dfp\" (UniqueName: \"kubernetes.io/projected/2f579c52-f579-4acb-824a-46d88361ce98-kube-api-access-p2dfp\") pod \"console-operator-58897d9998-nj9hx\" (UID: \"2f579c52-f579-4acb-824a-46d88361ce98\") " 
pod="openshift-console-operator/console-operator-58897d9998-nj9hx" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915569 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ef7ad6b-12b9-46b2-96e4-f4634170ab20-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7fgq4\" (UID: \"6ef7ad6b-12b9-46b2-96e4-f4634170ab20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915593 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9b3201b-b5e0-4a95-9015-97309eb9957e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915612 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915638 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915659 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9b3201b-b5e0-4a95-9015-97309eb9957e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915682 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-registry-tls\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915700 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9b3201b-b5e0-4a95-9015-97309eb9957e-registry-certificates\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915721 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f579c52-f579-4acb-824a-46d88361ce98-trusted-ca\") pod \"console-operator-58897d9998-nj9hx\" (UID: \"2f579c52-f579-4acb-824a-46d88361ce98\") " pod="openshift-console-operator/console-operator-58897d9998-nj9hx" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915756 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f172f40-f7d9-49a1-acf0-b2596b2c3bde-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bs5zz\" (UID: \"1f172f40-f7d9-49a1-acf0-b2596b2c3bde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz" Feb 18 11:51:47 crc 
kubenswrapper[4717]: I0218 11:51:47.915790 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0f2b6b73-0ea9-4de9-9bc4-e76322e04a91-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x6rkx\" (UID: \"0f2b6b73-0ea9-4de9-9bc4-e76322e04a91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915812 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ef7ad6b-12b9-46b2-96e4-f4634170ab20-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7fgq4\" (UID: \"6ef7ad6b-12b9-46b2-96e4-f4634170ab20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915834 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f2b6b73-0ea9-4de9-9bc4-e76322e04a91-serving-cert\") pod \"openshift-config-operator-7777fb866f-x6rkx\" (UID: \"0f2b6b73-0ea9-4de9-9bc4-e76322e04a91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915857 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ef7ad6b-12b9-46b2-96e4-f4634170ab20-config\") pod \"kube-controller-manager-operator-78b949d7b-7fgq4\" (UID: \"6ef7ad6b-12b9-46b2-96e4-f4634170ab20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915883 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/518edf1a-e4f5-450a-90ff-151dc3106649-audit-dir\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915902 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.915923 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.918533 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: E0218 11:51:47.918703 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:48.418657895 +0000 UTC m=+142.820759211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.919301 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9b3201b-b5e0-4a95-9015-97309eb9957e-registry-certificates\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.920113 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9b3201b-b5e0-4a95-9015-97309eb9957e-trusted-ca\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.921195 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ef7ad6b-12b9-46b2-96e4-f4634170ab20-config\") pod \"kube-controller-manager-operator-78b949d7b-7fgq4\" (UID: \"6ef7ad6b-12b9-46b2-96e4-f4634170ab20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.921520 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f172f40-f7d9-49a1-acf0-b2596b2c3bde-config\") pod \"kube-apiserver-operator-766d6c64bb-bs5zz\" (UID: 
\"1f172f40-f7d9-49a1-acf0-b2596b2c3bde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.921785 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/518edf1a-e4f5-450a-90ff-151dc3106649-audit-dir\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.922241 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.922365 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9b3201b-b5e0-4a95-9015-97309eb9957e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.922534 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-audit-policies\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.924147 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.930302 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.933911 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9b3201b-b5e0-4a95-9015-97309eb9957e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.934681 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.937881 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.948557 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.951218 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.951315 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.952792 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-registry-tls\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.957000 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f172f40-f7d9-49a1-acf0-b2596b2c3bde-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bs5zz\" (UID: \"1f172f40-f7d9-49a1-acf0-b2596b2c3bde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.957019 
4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.957040 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ef7ad6b-12b9-46b2-96e4-f4634170ab20-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7fgq4\" (UID: \"6ef7ad6b-12b9-46b2-96e4-f4634170ab20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.957458 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.966517 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.973416 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5n5g\" (UniqueName: \"kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-kube-api-access-x5n5g\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.974428 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:47 crc kubenswrapper[4717]: I0218 11:51:47.985688 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj7dr\" (UniqueName: \"kubernetes.io/projected/518edf1a-e4f5-450a-90ff-151dc3106649-kube-api-access-nj7dr\") pod \"oauth-openshift-558db77b4-rzqs7\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.007874 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ef7ad6b-12b9-46b2-96e4-f4634170ab20-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7fgq4\" (UID: \"6ef7ad6b-12b9-46b2-96e4-f4634170ab20\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.017283 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/fc2d4b81-f76c-46fb-bb47-d575784e849b-config-volume\") pod \"dns-default-zlwvm\" (UID: \"fc2d4b81-f76c-46fb-bb47-d575784e849b\") " pod="openshift-dns/dns-default-zlwvm" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.017318 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c9a6347e-94c1-41c6-829a-73fccd79a3ea-plugins-dir\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.017361 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84n7m\" (UniqueName: \"kubernetes.io/projected/016f064e-8db6-41ed-a2af-2d9ea9169703-kube-api-access-84n7m\") pod \"marketplace-operator-79b997595-qtw2w\" (UID: \"016f064e-8db6-41ed-a2af-2d9ea9169703\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.017382 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a9e5d6c2-0001-4e35-9ccd-f9096cf2431d-signing-cabundle\") pod \"service-ca-9c57cc56f-sh4qc\" (UID: \"a9e5d6c2-0001-4e35-9ccd-f9096cf2431d\") " pod="openshift-service-ca/service-ca-9c57cc56f-sh4qc" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.017411 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78e76a00-064d-419f-bc39-a6e0d81e3176-metrics-certs\") pod \"router-default-5444994796-dnv67\" (UID: \"78e76a00-064d-419f-bc39-a6e0d81e3176\") " pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.017427 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz86j\" (UniqueName: \"kubernetes.io/projected/c9a6347e-94c1-41c6-829a-73fccd79a3ea-kube-api-access-mz86j\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.017517 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j47vq\" (UniqueName: \"kubernetes.io/projected/60cce315-538e-4cd3-9fcc-d9aeab94b60d-kube-api-access-j47vq\") pod \"catalog-operator-68c6474976-9h9n9\" (UID: \"60cce315-538e-4cd3-9fcc-d9aeab94b60d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.017548 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f579c52-f579-4acb-824a-46d88361ce98-config\") pod \"console-operator-58897d9998-nj9hx\" (UID: \"2f579c52-f579-4acb-824a-46d88361ce98\") " pod="openshift-console-operator/console-operator-58897d9998-nj9hx" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.017576 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e856c873-7b06-4491-a886-800c11b0ce7a-node-bootstrap-token\") pod \"machine-config-server-k2lck\" (UID: \"e856c873-7b06-4491-a886-800c11b0ce7a\") " pod="openshift-machine-config-operator/machine-config-server-k2lck" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.018268 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f579c52-f579-4acb-824a-46d88361ce98-serving-cert\") pod \"console-operator-58897d9998-nj9hx\" (UID: \"2f579c52-f579-4acb-824a-46d88361ce98\") " 
pod="openshift-console-operator/console-operator-58897d9998-nj9hx" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.018311 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8xxf\" (UniqueName: \"kubernetes.io/projected/54297048-fbeb-4aca-941a-90ab437b2068-kube-api-access-r8xxf\") pod \"service-ca-operator-777779d784-z28q6\" (UID: \"54297048-fbeb-4aca-941a-90ab437b2068\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z28q6" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.018337 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.018385 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzqwl\" (UniqueName: \"kubernetes.io/projected/76d81a79-d7c1-427d-a8e2-11a62eee22cb-kube-api-access-nzqwl\") pod \"olm-operator-6b444d44fb-vjmm4\" (UID: \"76d81a79-d7c1-427d-a8e2-11a62eee22cb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.018414 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tp98\" (UniqueName: \"kubernetes.io/projected/23ebf6ad-2b3d-4d30-885f-d8060245af0c-kube-api-access-4tp98\") pod \"multus-admission-controller-857f4d67dd-4gqhf\" (UID: \"23ebf6ad-2b3d-4d30-885f-d8060245af0c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4gqhf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.018431 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc2d4b81-f76c-46fb-bb47-d575784e849b-metrics-tls\") pod \"dns-default-zlwvm\" (UID: \"fc2d4b81-f76c-46fb-bb47-d575784e849b\") " pod="openshift-dns/dns-default-zlwvm" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.018475 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54297048-fbeb-4aca-941a-90ab437b2068-config\") pod \"service-ca-operator-777779d784-z28q6\" (UID: \"54297048-fbeb-4aca-941a-90ab437b2068\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z28q6" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.018492 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c9a6347e-94c1-41c6-829a-73fccd79a3ea-socket-dir\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.018520 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/624afd8c-8a76-42e8-b83d-d122f22464f9-webhook-cert\") pod \"packageserver-d55dfcdfc-972tf\" (UID: \"624afd8c-8a76-42e8-b83d-d122f22464f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.018540 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e856c873-7b06-4491-a886-800c11b0ce7a-certs\") pod \"machine-config-server-k2lck\" (UID: \"e856c873-7b06-4491-a886-800c11b0ce7a\") " pod="openshift-machine-config-operator/machine-config-server-k2lck" Feb 18 11:51:48 crc kubenswrapper[4717]: 
I0218 11:51:48.018597 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/60cce315-538e-4cd3-9fcc-d9aeab94b60d-profile-collector-cert\") pod \"catalog-operator-68c6474976-9h9n9\" (UID: \"60cce315-538e-4cd3-9fcc-d9aeab94b60d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.018933 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dgjk\" (UniqueName: \"kubernetes.io/projected/624afd8c-8a76-42e8-b83d-d122f22464f9-kube-api-access-7dgjk\") pod \"packageserver-d55dfcdfc-972tf\" (UID: \"624afd8c-8a76-42e8-b83d-d122f22464f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.018958 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dkzb\" (UniqueName: \"kubernetes.io/projected/fc2d4b81-f76c-46fb-bb47-d575784e849b-kube-api-access-9dkzb\") pod \"dns-default-zlwvm\" (UID: \"fc2d4b81-f76c-46fb-bb47-d575784e849b\") " pod="openshift-dns/dns-default-zlwvm" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.018991 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcnxw\" (UniqueName: \"kubernetes.io/projected/3fd93e51-d24d-4242-9224-372c409de81e-kube-api-access-qcnxw\") pod \"migrator-59844c95c7-cljzk\" (UID: \"3fd93e51-d24d-4242-9224-372c409de81e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljzk" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.019034 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a9e5d6c2-0001-4e35-9ccd-f9096cf2431d-signing-key\") pod 
\"service-ca-9c57cc56f-sh4qc\" (UID: \"a9e5d6c2-0001-4e35-9ccd-f9096cf2431d\") " pod="openshift-service-ca/service-ca-9c57cc56f-sh4qc" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.019049 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/78e76a00-064d-419f-bc39-a6e0d81e3176-default-certificate\") pod \"router-default-5444994796-dnv67\" (UID: \"78e76a00-064d-419f-bc39-a6e0d81e3176\") " pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.019064 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dptf\" (UniqueName: \"kubernetes.io/projected/78e76a00-064d-419f-bc39-a6e0d81e3176-kube-api-access-8dptf\") pod \"router-default-5444994796-dnv67\" (UID: \"78e76a00-064d-419f-bc39-a6e0d81e3176\") " pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.019107 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/60cce315-538e-4cd3-9fcc-d9aeab94b60d-srv-cert\") pod \"catalog-operator-68c6474976-9h9n9\" (UID: \"60cce315-538e-4cd3-9fcc-d9aeab94b60d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.019133 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjtzr\" (UniqueName: \"kubernetes.io/projected/cdaadf82-1fa9-426e-998f-096124387143-kube-api-access-qjtzr\") pod \"ingress-canary-zss8p\" (UID: \"cdaadf82-1fa9-426e-998f-096124387143\") " pod="openshift-ingress-canary/ingress-canary-zss8p" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.019187 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lm4kb\" (UniqueName: \"kubernetes.io/projected/0f2b6b73-0ea9-4de9-9bc4-e76322e04a91-kube-api-access-lm4kb\") pod \"openshift-config-operator-7777fb866f-x6rkx\" (UID: \"0f2b6b73-0ea9-4de9-9bc4-e76322e04a91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.019750 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c9a6347e-94c1-41c6-829a-73fccd79a3ea-csi-data-dir\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.019804 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2dfp\" (UniqueName: \"kubernetes.io/projected/2f579c52-f579-4acb-824a-46d88361ce98-kube-api-access-p2dfp\") pod \"console-operator-58897d9998-nj9hx\" (UID: \"2f579c52-f579-4acb-824a-46d88361ce98\") " pod="openshift-console-operator/console-operator-58897d9998-nj9hx" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.019821 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/624afd8c-8a76-42e8-b83d-d122f22464f9-tmpfs\") pod \"packageserver-d55dfcdfc-972tf\" (UID: \"624afd8c-8a76-42e8-b83d-d122f22464f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.019836 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh6mk\" (UniqueName: \"kubernetes.io/projected/a9e5d6c2-0001-4e35-9ccd-f9096cf2431d-kube-api-access-jh6mk\") pod \"service-ca-9c57cc56f-sh4qc\" (UID: \"a9e5d6c2-0001-4e35-9ccd-f9096cf2431d\") " pod="openshift-service-ca/service-ca-9c57cc56f-sh4qc" Feb 18 
11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.019852 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/016f064e-8db6-41ed-a2af-2d9ea9169703-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qtw2w\" (UID: \"016f064e-8db6-41ed-a2af-2d9ea9169703\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.019867 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfcbb5c0-314c-427c-8897-987309aa9965-proxy-tls\") pod \"machine-config-operator-74547568cd-dt9kj\" (UID: \"dfcbb5c0-314c-427c-8897-987309aa9965\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.019891 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76d81a79-d7c1-427d-a8e2-11a62eee22cb-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vjmm4\" (UID: \"76d81a79-d7c1-427d-a8e2-11a62eee22cb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020016 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh5wb\" (UniqueName: \"kubernetes.io/projected/c53aaac1-4a8c-439e-8d51-60054a95ed11-kube-api-access-mh5wb\") pod \"control-plane-machine-set-operator-78cbb6b69f-jmlmn\" (UID: \"c53aaac1-4a8c-439e-8d51-60054a95ed11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jmlmn" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020033 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6cgz2\" (UniqueName: \"kubernetes.io/projected/0c2e253c-448f-448b-8419-b898112f632c-kube-api-access-6cgz2\") pod \"collect-profiles-29523585-62qk2\" (UID: \"0c2e253c-448f-448b-8419-b898112f632c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020068 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/016f064e-8db6-41ed-a2af-2d9ea9169703-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qtw2w\" (UID: \"016f064e-8db6-41ed-a2af-2d9ea9169703\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020083 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj8wg\" (UniqueName: \"kubernetes.io/projected/e856c873-7b06-4491-a886-800c11b0ce7a-kube-api-access-dj8wg\") pod \"machine-config-server-k2lck\" (UID: \"e856c873-7b06-4491-a886-800c11b0ce7a\") " pod="openshift-machine-config-operator/machine-config-server-k2lck" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020099 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn5tt\" (UniqueName: \"kubernetes.io/projected/dfcbb5c0-314c-427c-8897-987309aa9965-kube-api-access-rn5tt\") pod \"machine-config-operator-74547568cd-dt9kj\" (UID: \"dfcbb5c0-314c-427c-8897-987309aa9965\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020117 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c9a6347e-94c1-41c6-829a-73fccd79a3ea-registration-dir\") pod \"csi-hostpathplugin-s5p7r\" (UID: 
\"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020143 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4hfl\" (UniqueName: \"kubernetes.io/projected/4d5c57e4-bfdb-46b7-b58a-8c71f90c3321-kube-api-access-n4hfl\") pod \"package-server-manager-789f6589d5-vw62k\" (UID: \"4d5c57e4-bfdb-46b7-b58a-8c71f90c3321\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020160 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5c57e4-bfdb-46b7-b58a-8c71f90c3321-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vw62k\" (UID: \"4d5c57e4-bfdb-46b7-b58a-8c71f90c3321\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020222 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f579c52-f579-4acb-824a-46d88361ce98-trusted-ca\") pod \"console-operator-58897d9998-nj9hx\" (UID: \"2f579c52-f579-4acb-824a-46d88361ce98\") " pod="openshift-console-operator/console-operator-58897d9998-nj9hx" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020287 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0f2b6b73-0ea9-4de9-9bc4-e76322e04a91-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x6rkx\" (UID: \"0f2b6b73-0ea9-4de9-9bc4-e76322e04a91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020304 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f2b6b73-0ea9-4de9-9bc4-e76322e04a91-serving-cert\") pod \"openshift-config-operator-7777fb866f-x6rkx\" (UID: \"0f2b6b73-0ea9-4de9-9bc4-e76322e04a91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020325 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c53aaac1-4a8c-439e-8d51-60054a95ed11-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jmlmn\" (UID: \"c53aaac1-4a8c-439e-8d51-60054a95ed11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jmlmn" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020344 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/78e76a00-064d-419f-bc39-a6e0d81e3176-stats-auth\") pod \"router-default-5444994796-dnv67\" (UID: \"78e76a00-064d-419f-bc39-a6e0d81e3176\") " pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020359 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c2e253c-448f-448b-8419-b898112f632c-config-volume\") pod \"collect-profiles-29523585-62qk2\" (UID: \"0c2e253c-448f-448b-8419-b898112f632c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020406 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54297048-fbeb-4aca-941a-90ab437b2068-serving-cert\") pod 
\"service-ca-operator-777779d784-z28q6\" (UID: \"54297048-fbeb-4aca-941a-90ab437b2068\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z28q6" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020454 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23ebf6ad-2b3d-4d30-885f-d8060245af0c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4gqhf\" (UID: \"23ebf6ad-2b3d-4d30-885f-d8060245af0c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4gqhf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020484 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfcbb5c0-314c-427c-8897-987309aa9965-images\") pod \"machine-config-operator-74547568cd-dt9kj\" (UID: \"dfcbb5c0-314c-427c-8897-987309aa9965\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020556 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c2e253c-448f-448b-8419-b898112f632c-secret-volume\") pod \"collect-profiles-29523585-62qk2\" (UID: \"0c2e253c-448f-448b-8419-b898112f632c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020585 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78e76a00-064d-419f-bc39-a6e0d81e3176-service-ca-bundle\") pod \"router-default-5444994796-dnv67\" (UID: \"78e76a00-064d-419f-bc39-a6e0d81e3176\") " pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020600 4717 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdaadf82-1fa9-426e-998f-096124387143-cert\") pod \"ingress-canary-zss8p\" (UID: \"cdaadf82-1fa9-426e-998f-096124387143\") " pod="openshift-ingress-canary/ingress-canary-zss8p" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020615 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfcbb5c0-314c-427c-8897-987309aa9965-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dt9kj\" (UID: \"dfcbb5c0-314c-427c-8897-987309aa9965\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020669 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/624afd8c-8a76-42e8-b83d-d122f22464f9-apiservice-cert\") pod \"packageserver-d55dfcdfc-972tf\" (UID: \"624afd8c-8a76-42e8-b83d-d122f22464f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020685 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76d81a79-d7c1-427d-a8e2-11a62eee22cb-srv-cert\") pod \"olm-operator-6b444d44fb-vjmm4\" (UID: \"76d81a79-d7c1-427d-a8e2-11a62eee22cb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.020734 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c9a6347e-94c1-41c6-829a-73fccd79a3ea-mountpoint-dir\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " 
pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.023022 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f172f40-f7d9-49a1-acf0-b2596b2c3bde-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bs5zz\" (UID: \"1f172f40-f7d9-49a1-acf0-b2596b2c3bde\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.025768 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f579c52-f579-4acb-824a-46d88361ce98-config\") pod \"console-operator-58897d9998-nj9hx\" (UID: \"2f579c52-f579-4acb-824a-46d88361ce98\") " pod="openshift-console-operator/console-operator-58897d9998-nj9hx" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.033394 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.039668 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0f2b6b73-0ea9-4de9-9bc4-e76322e04a91-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x6rkx\" (UID: \"0f2b6b73-0ea9-4de9-9bc4-e76322e04a91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.040124 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4" Feb 18 11:51:48 crc kubenswrapper[4717]: E0218 11:51:48.040278 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:48.540244943 +0000 UTC m=+142.942346459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.053140 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.068011 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f2b6b73-0ea9-4de9-9bc4-e76322e04a91-serving-cert\") pod \"openshift-config-operator-7777fb866f-x6rkx\" (UID: \"0f2b6b73-0ea9-4de9-9bc4-e76322e04a91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.079944 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-bound-sa-token\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.082332 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz"] Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.087026 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f579c52-f579-4acb-824a-46d88361ce98-serving-cert\") pod \"console-operator-58897d9998-nj9hx\" (UID: \"2f579c52-f579-4acb-824a-46d88361ce98\") " pod="openshift-console-operator/console-operator-58897d9998-nj9hx" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.087604 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f579c52-f579-4acb-824a-46d88361ce98-trusted-ca\") pod \"console-operator-58897d9998-nj9hx\" (UID: \"2f579c52-f579-4acb-824a-46d88361ce98\") " pod="openshift-console-operator/console-operator-58897d9998-nj9hx" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.092689 4717 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm4kb\" (UniqueName: \"kubernetes.io/projected/0f2b6b73-0ea9-4de9-9bc4-e76322e04a91-kube-api-access-lm4kb\") pod \"openshift-config-operator-7777fb866f-x6rkx\" (UID: \"0f2b6b73-0ea9-4de9-9bc4-e76322e04a91\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.094535 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf"] Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.100193 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2dfp\" (UniqueName: \"kubernetes.io/projected/2f579c52-f579-4acb-824a-46d88361ce98-kube-api-access-p2dfp\") pod \"console-operator-58897d9998-nj9hx\" (UID: \"2f579c52-f579-4acb-824a-46d88361ce98\") " pod="openshift-console-operator/console-operator-58897d9998-nj9hx" Feb 18 11:51:48 crc kubenswrapper[4717]: W0218 11:51:48.107927 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41f921ae_dc74_4acf_a699_a5a5e574224d.slice/crio-3f135161f8ae3a705decda04dc9943943b8e35c5ad19a5cb6b617ac4c50634fa WatchSource:0}: Error finding container 3f135161f8ae3a705decda04dc9943943b8e35c5ad19a5cb6b617ac4c50634fa: Status 404 returned error can't find the container with id 3f135161f8ae3a705decda04dc9943943b8e35c5ad19a5cb6b617ac4c50634fa Feb 18 11:51:48 crc kubenswrapper[4717]: W0218 11:51:48.109707 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9aaa4ac9_0635_4d2b_8d50_6ff5a52ed26b.slice/crio-fb5234cd37135056bc7e0ccfbab8f3e131a51d11c3b9057c52b82b4a75d659c7 WatchSource:0}: Error finding container fb5234cd37135056bc7e0ccfbab8f3e131a51d11c3b9057c52b82b4a75d659c7: Status 404 returned error can't find the container 
with id fb5234cd37135056bc7e0ccfbab8f3e131a51d11c3b9057c52b82b4a75d659c7 Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128315 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128520 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfcbb5c0-314c-427c-8897-987309aa9965-images\") pod \"machine-config-operator-74547568cd-dt9kj\" (UID: \"dfcbb5c0-314c-427c-8897-987309aa9965\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128544 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23ebf6ad-2b3d-4d30-885f-d8060245af0c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4gqhf\" (UID: \"23ebf6ad-2b3d-4d30-885f-d8060245af0c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4gqhf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128568 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c2e253c-448f-448b-8419-b898112f632c-secret-volume\") pod \"collect-profiles-29523585-62qk2\" (UID: \"0c2e253c-448f-448b-8419-b898112f632c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128592 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78e76a00-064d-419f-bc39-a6e0d81e3176-service-ca-bundle\") pod 
\"router-default-5444994796-dnv67\" (UID: \"78e76a00-064d-419f-bc39-a6e0d81e3176\") " pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128607 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfcbb5c0-314c-427c-8897-987309aa9965-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dt9kj\" (UID: \"dfcbb5c0-314c-427c-8897-987309aa9965\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128627 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdaadf82-1fa9-426e-998f-096124387143-cert\") pod \"ingress-canary-zss8p\" (UID: \"cdaadf82-1fa9-426e-998f-096124387143\") " pod="openshift-ingress-canary/ingress-canary-zss8p" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128643 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/624afd8c-8a76-42e8-b83d-d122f22464f9-apiservice-cert\") pod \"packageserver-d55dfcdfc-972tf\" (UID: \"624afd8c-8a76-42e8-b83d-d122f22464f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128660 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76d81a79-d7c1-427d-a8e2-11a62eee22cb-srv-cert\") pod \"olm-operator-6b444d44fb-vjmm4\" (UID: \"76d81a79-d7c1-427d-a8e2-11a62eee22cb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128679 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/c9a6347e-94c1-41c6-829a-73fccd79a3ea-mountpoint-dir\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128694 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc2d4b81-f76c-46fb-bb47-d575784e849b-config-volume\") pod \"dns-default-zlwvm\" (UID: \"fc2d4b81-f76c-46fb-bb47-d575784e849b\") " pod="openshift-dns/dns-default-zlwvm" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128708 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c9a6347e-94c1-41c6-829a-73fccd79a3ea-plugins-dir\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128725 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84n7m\" (UniqueName: \"kubernetes.io/projected/016f064e-8db6-41ed-a2af-2d9ea9169703-kube-api-access-84n7m\") pod \"marketplace-operator-79b997595-qtw2w\" (UID: \"016f064e-8db6-41ed-a2af-2d9ea9169703\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128741 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a9e5d6c2-0001-4e35-9ccd-f9096cf2431d-signing-cabundle\") pod \"service-ca-9c57cc56f-sh4qc\" (UID: \"a9e5d6c2-0001-4e35-9ccd-f9096cf2431d\") " pod="openshift-service-ca/service-ca-9c57cc56f-sh4qc" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128756 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/78e76a00-064d-419f-bc39-a6e0d81e3176-metrics-certs\") pod \"router-default-5444994796-dnv67\" (UID: \"78e76a00-064d-419f-bc39-a6e0d81e3176\") " pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128772 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz86j\" (UniqueName: \"kubernetes.io/projected/c9a6347e-94c1-41c6-829a-73fccd79a3ea-kube-api-access-mz86j\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128801 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j47vq\" (UniqueName: \"kubernetes.io/projected/60cce315-538e-4cd3-9fcc-d9aeab94b60d-kube-api-access-j47vq\") pod \"catalog-operator-68c6474976-9h9n9\" (UID: \"60cce315-538e-4cd3-9fcc-d9aeab94b60d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128819 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e856c873-7b06-4491-a886-800c11b0ce7a-node-bootstrap-token\") pod \"machine-config-server-k2lck\" (UID: \"e856c873-7b06-4491-a886-800c11b0ce7a\") " pod="openshift-machine-config-operator/machine-config-server-k2lck" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128846 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8xxf\" (UniqueName: \"kubernetes.io/projected/54297048-fbeb-4aca-941a-90ab437b2068-kube-api-access-r8xxf\") pod \"service-ca-operator-777779d784-z28q6\" (UID: \"54297048-fbeb-4aca-941a-90ab437b2068\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z28q6" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128872 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzqwl\" (UniqueName: \"kubernetes.io/projected/76d81a79-d7c1-427d-a8e2-11a62eee22cb-kube-api-access-nzqwl\") pod \"olm-operator-6b444d44fb-vjmm4\" (UID: \"76d81a79-d7c1-427d-a8e2-11a62eee22cb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128890 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tp98\" (UniqueName: \"kubernetes.io/projected/23ebf6ad-2b3d-4d30-885f-d8060245af0c-kube-api-access-4tp98\") pod \"multus-admission-controller-857f4d67dd-4gqhf\" (UID: \"23ebf6ad-2b3d-4d30-885f-d8060245af0c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4gqhf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128895 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld"] Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128910 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc2d4b81-f76c-46fb-bb47-d575784e849b-metrics-tls\") pod \"dns-default-zlwvm\" (UID: \"fc2d4b81-f76c-46fb-bb47-d575784e849b\") " pod="openshift-dns/dns-default-zlwvm" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128932 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54297048-fbeb-4aca-941a-90ab437b2068-config\") pod \"service-ca-operator-777779d784-z28q6\" (UID: \"54297048-fbeb-4aca-941a-90ab437b2068\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z28q6" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128949 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/624afd8c-8a76-42e8-b83d-d122f22464f9-webhook-cert\") pod \"packageserver-d55dfcdfc-972tf\" (UID: \"624afd8c-8a76-42e8-b83d-d122f22464f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128966 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e856c873-7b06-4491-a886-800c11b0ce7a-certs\") pod \"machine-config-server-k2lck\" (UID: \"e856c873-7b06-4491-a886-800c11b0ce7a\") " pod="openshift-machine-config-operator/machine-config-server-k2lck" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128980 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c9a6347e-94c1-41c6-829a-73fccd79a3ea-socket-dir\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.128998 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/60cce315-538e-4cd3-9fcc-d9aeab94b60d-profile-collector-cert\") pod \"catalog-operator-68c6474976-9h9n9\" (UID: \"60cce315-538e-4cd3-9fcc-d9aeab94b60d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129016 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dkzb\" (UniqueName: \"kubernetes.io/projected/fc2d4b81-f76c-46fb-bb47-d575784e849b-kube-api-access-9dkzb\") pod \"dns-default-zlwvm\" (UID: \"fc2d4b81-f76c-46fb-bb47-d575784e849b\") " pod="openshift-dns/dns-default-zlwvm" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129032 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qcnxw\" (UniqueName: \"kubernetes.io/projected/3fd93e51-d24d-4242-9224-372c409de81e-kube-api-access-qcnxw\") pod \"migrator-59844c95c7-cljzk\" (UID: \"3fd93e51-d24d-4242-9224-372c409de81e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljzk" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129048 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dgjk\" (UniqueName: \"kubernetes.io/projected/624afd8c-8a76-42e8-b83d-d122f22464f9-kube-api-access-7dgjk\") pod \"packageserver-d55dfcdfc-972tf\" (UID: \"624afd8c-8a76-42e8-b83d-d122f22464f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129065 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a9e5d6c2-0001-4e35-9ccd-f9096cf2431d-signing-key\") pod \"service-ca-9c57cc56f-sh4qc\" (UID: \"a9e5d6c2-0001-4e35-9ccd-f9096cf2431d\") " pod="openshift-service-ca/service-ca-9c57cc56f-sh4qc" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129079 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/78e76a00-064d-419f-bc39-a6e0d81e3176-default-certificate\") pod \"router-default-5444994796-dnv67\" (UID: \"78e76a00-064d-419f-bc39-a6e0d81e3176\") " pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129095 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dptf\" (UniqueName: \"kubernetes.io/projected/78e76a00-064d-419f-bc39-a6e0d81e3176-kube-api-access-8dptf\") pod \"router-default-5444994796-dnv67\" (UID: \"78e76a00-064d-419f-bc39-a6e0d81e3176\") " pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129112 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/60cce315-538e-4cd3-9fcc-d9aeab94b60d-srv-cert\") pod \"catalog-operator-68c6474976-9h9n9\" (UID: \"60cce315-538e-4cd3-9fcc-d9aeab94b60d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129126 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjtzr\" (UniqueName: \"kubernetes.io/projected/cdaadf82-1fa9-426e-998f-096124387143-kube-api-access-qjtzr\") pod \"ingress-canary-zss8p\" (UID: \"cdaadf82-1fa9-426e-998f-096124387143\") " pod="openshift-ingress-canary/ingress-canary-zss8p" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129156 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c9a6347e-94c1-41c6-829a-73fccd79a3ea-csi-data-dir\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129172 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/624afd8c-8a76-42e8-b83d-d122f22464f9-tmpfs\") pod \"packageserver-d55dfcdfc-972tf\" (UID: \"624afd8c-8a76-42e8-b83d-d122f22464f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129187 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh6mk\" (UniqueName: \"kubernetes.io/projected/a9e5d6c2-0001-4e35-9ccd-f9096cf2431d-kube-api-access-jh6mk\") pod \"service-ca-9c57cc56f-sh4qc\" (UID: \"a9e5d6c2-0001-4e35-9ccd-f9096cf2431d\") " pod="openshift-service-ca/service-ca-9c57cc56f-sh4qc" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 
11:51:48.129204 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfcbb5c0-314c-427c-8897-987309aa9965-proxy-tls\") pod \"machine-config-operator-74547568cd-dt9kj\" (UID: \"dfcbb5c0-314c-427c-8897-987309aa9965\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129220 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/016f064e-8db6-41ed-a2af-2d9ea9169703-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qtw2w\" (UID: \"016f064e-8db6-41ed-a2af-2d9ea9169703\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129235 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/76d81a79-d7c1-427d-a8e2-11a62eee22cb-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vjmm4\" (UID: \"76d81a79-d7c1-427d-a8e2-11a62eee22cb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129320 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh5wb\" (UniqueName: \"kubernetes.io/projected/c53aaac1-4a8c-439e-8d51-60054a95ed11-kube-api-access-mh5wb\") pod \"control-plane-machine-set-operator-78cbb6b69f-jmlmn\" (UID: \"c53aaac1-4a8c-439e-8d51-60054a95ed11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jmlmn" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129339 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cgz2\" (UniqueName: \"kubernetes.io/projected/0c2e253c-448f-448b-8419-b898112f632c-kube-api-access-6cgz2\") pod 
\"collect-profiles-29523585-62qk2\" (UID: \"0c2e253c-448f-448b-8419-b898112f632c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129355 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/016f064e-8db6-41ed-a2af-2d9ea9169703-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qtw2w\" (UID: \"016f064e-8db6-41ed-a2af-2d9ea9169703\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129370 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj8wg\" (UniqueName: \"kubernetes.io/projected/e856c873-7b06-4491-a886-800c11b0ce7a-kube-api-access-dj8wg\") pod \"machine-config-server-k2lck\" (UID: \"e856c873-7b06-4491-a886-800c11b0ce7a\") " pod="openshift-machine-config-operator/machine-config-server-k2lck" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129386 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn5tt\" (UniqueName: \"kubernetes.io/projected/dfcbb5c0-314c-427c-8897-987309aa9965-kube-api-access-rn5tt\") pod \"machine-config-operator-74547568cd-dt9kj\" (UID: \"dfcbb5c0-314c-427c-8897-987309aa9965\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129401 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c9a6347e-94c1-41c6-829a-73fccd79a3ea-registration-dir\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129419 4717 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-n4hfl\" (UniqueName: \"kubernetes.io/projected/4d5c57e4-bfdb-46b7-b58a-8c71f90c3321-kube-api-access-n4hfl\") pod \"package-server-manager-789f6589d5-vw62k\" (UID: \"4d5c57e4-bfdb-46b7-b58a-8c71f90c3321\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129436 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5c57e4-bfdb-46b7-b58a-8c71f90c3321-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vw62k\" (UID: \"4d5c57e4-bfdb-46b7-b58a-8c71f90c3321\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129470 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c53aaac1-4a8c-439e-8d51-60054a95ed11-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jmlmn\" (UID: \"c53aaac1-4a8c-439e-8d51-60054a95ed11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jmlmn" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129486 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/78e76a00-064d-419f-bc39-a6e0d81e3176-stats-auth\") pod \"router-default-5444994796-dnv67\" (UID: \"78e76a00-064d-419f-bc39-a6e0d81e3176\") " pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129502 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c2e253c-448f-448b-8419-b898112f632c-config-volume\") pod \"collect-profiles-29523585-62qk2\" (UID: 
\"0c2e253c-448f-448b-8419-b898112f632c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.129521 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54297048-fbeb-4aca-941a-90ab437b2068-serving-cert\") pod \"service-ca-operator-777779d784-z28q6\" (UID: \"54297048-fbeb-4aca-941a-90ab437b2068\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z28q6" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.134566 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfcbb5c0-314c-427c-8897-987309aa9965-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dt9kj\" (UID: \"dfcbb5c0-314c-427c-8897-987309aa9965\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.135194 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78e76a00-064d-419f-bc39-a6e0d81e3176-service-ca-bundle\") pod \"router-default-5444994796-dnv67\" (UID: \"78e76a00-064d-419f-bc39-a6e0d81e3176\") " pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.135445 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c9a6347e-94c1-41c6-829a-73fccd79a3ea-mountpoint-dir\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.135554 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/c9a6347e-94c1-41c6-829a-73fccd79a3ea-csi-data-dir\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.135852 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/624afd8c-8a76-42e8-b83d-d122f22464f9-tmpfs\") pod \"packageserver-d55dfcdfc-972tf\" (UID: \"624afd8c-8a76-42e8-b83d-d122f22464f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.136923 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c9a6347e-94c1-41c6-829a-73fccd79a3ea-registration-dir\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: E0218 11:51:48.138839 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:48.638812326 +0000 UTC m=+143.040913642 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.140322 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/78e76a00-064d-419f-bc39-a6e0d81e3176-default-certificate\") pod \"router-default-5444994796-dnv67\" (UID: \"78e76a00-064d-419f-bc39-a6e0d81e3176\") " pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.140762 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a9e5d6c2-0001-4e35-9ccd-f9096cf2431d-signing-key\") pod \"service-ca-9c57cc56f-sh4qc\" (UID: \"a9e5d6c2-0001-4e35-9ccd-f9096cf2431d\") " pod="openshift-service-ca/service-ca-9c57cc56f-sh4qc" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.142318 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c2e253c-448f-448b-8419-b898112f632c-config-volume\") pod \"collect-profiles-29523585-62qk2\" (UID: \"0c2e253c-448f-448b-8419-b898112f632c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.146094 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c9a6347e-94c1-41c6-829a-73fccd79a3ea-socket-dir\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " 
pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.147386 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc2d4b81-f76c-46fb-bb47-d575784e849b-config-volume\") pod \"dns-default-zlwvm\" (UID: \"fc2d4b81-f76c-46fb-bb47-d575784e849b\") " pod="openshift-dns/dns-default-zlwvm" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.147858 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c53aaac1-4a8c-439e-8d51-60054a95ed11-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jmlmn\" (UID: \"c53aaac1-4a8c-439e-8d51-60054a95ed11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jmlmn" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.147242 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c9a6347e-94c1-41c6-829a-73fccd79a3ea-plugins-dir\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.148353 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54297048-fbeb-4aca-941a-90ab437b2068-config\") pod \"service-ca-operator-777779d784-z28q6\" (UID: \"54297048-fbeb-4aca-941a-90ab437b2068\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z28q6" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.148967 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a9e5d6c2-0001-4e35-9ccd-f9096cf2431d-signing-cabundle\") pod \"service-ca-9c57cc56f-sh4qc\" (UID: 
\"a9e5d6c2-0001-4e35-9ccd-f9096cf2431d\") " pod="openshift-service-ca/service-ca-9c57cc56f-sh4qc" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.151637 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/60cce315-538e-4cd3-9fcc-d9aeab94b60d-profile-collector-cert\") pod \"catalog-operator-68c6474976-9h9n9\" (UID: \"60cce315-538e-4cd3-9fcc-d9aeab94b60d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.151717 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc2d4b81-f76c-46fb-bb47-d575784e849b-metrics-tls\") pod \"dns-default-zlwvm\" (UID: \"fc2d4b81-f76c-46fb-bb47-d575784e849b\") " pod="openshift-dns/dns-default-zlwvm" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.152855 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54297048-fbeb-4aca-941a-90ab437b2068-serving-cert\") pod \"service-ca-operator-777779d784-z28q6\" (UID: \"54297048-fbeb-4aca-941a-90ab437b2068\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z28q6" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.153675 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/624afd8c-8a76-42e8-b83d-d122f22464f9-webhook-cert\") pod \"packageserver-d55dfcdfc-972tf\" (UID: \"624afd8c-8a76-42e8-b83d-d122f22464f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.155848 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e856c873-7b06-4491-a886-800c11b0ce7a-certs\") pod \"machine-config-server-k2lck\" (UID: 
\"e856c873-7b06-4491-a886-800c11b0ce7a\") " pod="openshift-machine-config-operator/machine-config-server-k2lck" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.156733 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e856c873-7b06-4491-a886-800c11b0ce7a-node-bootstrap-token\") pod \"machine-config-server-k2lck\" (UID: \"e856c873-7b06-4491-a886-800c11b0ce7a\") " pod="openshift-machine-config-operator/machine-config-server-k2lck" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.157220 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78e76a00-064d-419f-bc39-a6e0d81e3176-metrics-certs\") pod \"router-default-5444994796-dnv67\" (UID: \"78e76a00-064d-419f-bc39-a6e0d81e3176\") " pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.161283 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/624afd8c-8a76-42e8-b83d-d122f22464f9-apiservice-cert\") pod \"packageserver-d55dfcdfc-972tf\" (UID: \"624afd8c-8a76-42e8-b83d-d122f22464f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.161739 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfcbb5c0-314c-427c-8897-987309aa9965-proxy-tls\") pod \"machine-config-operator-74547568cd-dt9kj\" (UID: \"dfcbb5c0-314c-427c-8897-987309aa9965\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.162241 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c2e253c-448f-448b-8419-b898112f632c-secret-volume\") pod 
\"collect-profiles-29523585-62qk2\" (UID: \"0c2e253c-448f-448b-8419-b898112f632c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.162904 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdaadf82-1fa9-426e-998f-096124387143-cert\") pod \"ingress-canary-zss8p\" (UID: \"cdaadf82-1fa9-426e-998f-096124387143\") " pod="openshift-ingress-canary/ingress-canary-zss8p" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.164634 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/78e76a00-064d-419f-bc39-a6e0d81e3176-stats-auth\") pod \"router-default-5444994796-dnv67\" (UID: \"78e76a00-064d-419f-bc39-a6e0d81e3176\") " pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.165195 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d5c57e4-bfdb-46b7-b58a-8c71f90c3321-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vw62k\" (UID: \"4d5c57e4-bfdb-46b7-b58a-8c71f90c3321\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.165342 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/76d81a79-d7c1-427d-a8e2-11a62eee22cb-srv-cert\") pod \"olm-operator-6b444d44fb-vjmm4\" (UID: \"76d81a79-d7c1-427d-a8e2-11a62eee22cb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.165843 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/76d81a79-d7c1-427d-a8e2-11a62eee22cb-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vjmm4\" (UID: \"76d81a79-d7c1-427d-a8e2-11a62eee22cb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.166697 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/60cce315-538e-4cd3-9fcc-d9aeab94b60d-srv-cert\") pod \"catalog-operator-68c6474976-9h9n9\" (UID: \"60cce315-538e-4cd3-9fcc-d9aeab94b60d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.172694 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/23ebf6ad-2b3d-4d30-885f-d8060245af0c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4gqhf\" (UID: \"23ebf6ad-2b3d-4d30-885f-d8060245af0c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4gqhf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.186967 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-s9dzs"] Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.191967 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh5wb\" (UniqueName: \"kubernetes.io/projected/c53aaac1-4a8c-439e-8d51-60054a95ed11-kube-api-access-mh5wb\") pod \"control-plane-machine-set-operator-78cbb6b69f-jmlmn\" (UID: \"c53aaac1-4a8c-439e-8d51-60054a95ed11\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jmlmn" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.193875 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dptf\" (UniqueName: \"kubernetes.io/projected/78e76a00-064d-419f-bc39-a6e0d81e3176-kube-api-access-8dptf\") pod \"router-default-5444994796-dnv67\" (UID: 
\"78e76a00-064d-419f-bc39-a6e0d81e3176\") " pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.198436 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjtzr\" (UniqueName: \"kubernetes.io/projected/cdaadf82-1fa9-426e-998f-096124387143-kube-api-access-qjtzr\") pod \"ingress-canary-zss8p\" (UID: \"cdaadf82-1fa9-426e-998f-096124387143\") " pod="openshift-ingress-canary/ingress-canary-zss8p" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.202912 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-zss8p" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.209705 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.219289 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cgz2\" (UniqueName: \"kubernetes.io/projected/0c2e253c-448f-448b-8419-b898112f632c-kube-api-access-6cgz2\") pod \"collect-profiles-29523585-62qk2\" (UID: \"0c2e253c-448f-448b-8419-b898112f632c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.230504 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:48 crc kubenswrapper[4717]: E0218 11:51:48.231116 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:48.731101131 +0000 UTC m=+143.133202457 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.257346 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh6mk\" (UniqueName: \"kubernetes.io/projected/a9e5d6c2-0001-4e35-9ccd-f9096cf2431d-kube-api-access-jh6mk\") pod \"service-ca-9c57cc56f-sh4qc\" (UID: \"a9e5d6c2-0001-4e35-9ccd-f9096cf2431d\") " pod="openshift-service-ca/service-ca-9c57cc56f-sh4qc" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.275160 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/016f064e-8db6-41ed-a2af-2d9ea9169703-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qtw2w\" (UID: \"016f064e-8db6-41ed-a2af-2d9ea9169703\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.275277 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfcbb5c0-314c-427c-8897-987309aa9965-images\") pod \"machine-config-operator-74547568cd-dt9kj\" (UID: \"dfcbb5c0-314c-427c-8897-987309aa9965\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.280622 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj8wg\" 
(UniqueName: \"kubernetes.io/projected/e856c873-7b06-4491-a886-800c11b0ce7a-kube-api-access-dj8wg\") pod \"machine-config-server-k2lck\" (UID: \"e856c873-7b06-4491-a886-800c11b0ce7a\") " pod="openshift-machine-config-operator/machine-config-server-k2lck" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.282846 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/016f064e-8db6-41ed-a2af-2d9ea9169703-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qtw2w\" (UID: \"016f064e-8db6-41ed-a2af-2d9ea9169703\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.289904 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn5tt\" (UniqueName: \"kubernetes.io/projected/dfcbb5c0-314c-427c-8897-987309aa9965-kube-api-access-rn5tt\") pod \"machine-config-operator-74547568cd-dt9kj\" (UID: \"dfcbb5c0-314c-427c-8897-987309aa9965\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.311232 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4hfl\" (UniqueName: \"kubernetes.io/projected/4d5c57e4-bfdb-46b7-b58a-8c71f90c3321-kube-api-access-n4hfl\") pod \"package-server-manager-789f6589d5-vw62k\" (UID: \"4d5c57e4-bfdb-46b7-b58a-8c71f90c3321\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k" Feb 18 11:51:48 crc kubenswrapper[4717]: W0218 11:51:48.321002 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode43c4105_3759_46ee_bde3_191e4ce3c318.slice/crio-41c82a8490af843ec0699e978f9cd050dfd43711c74460c1e9c12bf33ad2eaaa WatchSource:0}: Error finding container 41c82a8490af843ec0699e978f9cd050dfd43711c74460c1e9c12bf33ad2eaaa: Status 
404 returned error can't find the container with id 41c82a8490af843ec0699e978f9cd050dfd43711c74460c1e9c12bf33ad2eaaa Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.322546 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nj9hx" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.330490 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzqwl\" (UniqueName: \"kubernetes.io/projected/76d81a79-d7c1-427d-a8e2-11a62eee22cb-kube-api-access-nzqwl\") pod \"olm-operator-6b444d44fb-vjmm4\" (UID: \"76d81a79-d7c1-427d-a8e2-11a62eee22cb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.331659 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:48 crc kubenswrapper[4717]: E0218 11:51:48.332043 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:48.832022681 +0000 UTC m=+143.234123997 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.332425 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:48 crc kubenswrapper[4717]: E0218 11:51:48.332799 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:48.832787963 +0000 UTC m=+143.234889279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.361857 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dkzb\" (UniqueName: \"kubernetes.io/projected/fc2d4b81-f76c-46fb-bb47-d575784e849b-kube-api-access-9dkzb\") pod \"dns-default-zlwvm\" (UID: \"fc2d4b81-f76c-46fb-bb47-d575784e849b\") " pod="openshift-dns/dns-default-zlwvm" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.388084 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.393398 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84n7m\" (UniqueName: \"kubernetes.io/projected/016f064e-8db6-41ed-a2af-2d9ea9169703-kube-api-access-84n7m\") pod \"marketplace-operator-79b997595-qtw2w\" (UID: \"016f064e-8db6-41ed-a2af-2d9ea9169703\") " pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.394028 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j47vq\" (UniqueName: \"kubernetes.io/projected/60cce315-538e-4cd3-9fcc-d9aeab94b60d-kube-api-access-j47vq\") pod \"catalog-operator-68c6474976-9h9n9\" (UID: \"60cce315-538e-4cd3-9fcc-d9aeab94b60d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.395486 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-tph6g"] Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.400920 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.401495 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcnxw\" (UniqueName: \"kubernetes.io/projected/3fd93e51-d24d-4242-9224-372c409de81e-kube-api-access-qcnxw\") pod \"migrator-59844c95c7-cljzk\" (UID: \"3fd93e51-d24d-4242-9224-372c409de81e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljzk" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.415370 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.422782 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.433652 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dgjk\" (UniqueName: \"kubernetes.io/projected/624afd8c-8a76-42e8-b83d-d122f22464f9-kube-api-access-7dgjk\") pod \"packageserver-d55dfcdfc-972tf\" (UID: \"624afd8c-8a76-42e8-b83d-d122f22464f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.433900 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.434630 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl"] Feb 18 11:51:48 crc kubenswrapper[4717]: E0218 11:51:48.434633 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:48.934521087 +0000 UTC m=+143.336622403 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.439964 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.445024 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tp98\" (UniqueName: \"kubernetes.io/projected/23ebf6ad-2b3d-4d30-885f-d8060245af0c-kube-api-access-4tp98\") pod \"multus-admission-controller-857f4d67dd-4gqhf\" (UID: \"23ebf6ad-2b3d-4d30-885f-d8060245af0c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4gqhf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.453750 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-sh4qc" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.457506 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8xxf\" (UniqueName: \"kubernetes.io/projected/54297048-fbeb-4aca-941a-90ab437b2068-kube-api-access-r8xxf\") pod \"service-ca-operator-777779d784-z28q6\" (UID: \"54297048-fbeb-4aca-941a-90ab437b2068\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-z28q6" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.471143 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jmlmn" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.478130 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z28q6" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.481321 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz86j\" (UniqueName: \"kubernetes.io/projected/c9a6347e-94c1-41c6-829a-73fccd79a3ea-kube-api-access-mz86j\") pod \"csi-hostpathplugin-s5p7r\" (UID: \"c9a6347e-94c1-41c6-829a-73fccd79a3ea\") " pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.484749 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.495333 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.511680 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-k2lck" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.526141 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-zlwvm" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.535358 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:48 crc kubenswrapper[4717]: E0218 11:51:48.535720 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:49.035705273 +0000 UTC m=+143.437806599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:48 crc kubenswrapper[4717]: W0218 11:51:48.535776 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5029ef8a_c6f0_43e4_b25c_d2f695020357.slice/crio-ac5ba1bf53463fc46e6a25bb9c273f83a7335dbd8e97fee1d858c2876bbba02b WatchSource:0}: Error finding container ac5ba1bf53463fc46e6a25bb9c273f83a7335dbd8e97fee1d858c2876bbba02b: Status 404 returned error can't find the container with id ac5ba1bf53463fc46e6a25bb9c273f83a7335dbd8e97fee1d858c2876bbba02b Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.538236 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.599790 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d"] Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.600475 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq"] Feb 18 11:51:48 crc kubenswrapper[4717]: W0218 11:51:48.600501 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78e76a00_064d_419f_bc39_a6e0d81e3176.slice/crio-9054eadedb67a171c84e271a0ca0d464146d517f1d30dc5b2104be74e60f7163 WatchSource:0}: Error finding container 9054eadedb67a171c84e271a0ca0d464146d517f1d30dc5b2104be74e60f7163: Status 404 returned error can't find the container with id 9054eadedb67a171c84e271a0ca0d464146d517f1d30dc5b2104be74e60f7163 Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.635648 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mk6cn"] Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.641291 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:48 crc kubenswrapper[4717]: E0218 11:51:48.641594 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:49.141570924 +0000 UTC m=+143.543672240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.641939 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.642075 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7stbl"] Feb 18 11:51:48 crc kubenswrapper[4717]: E0218 11:51:48.645470 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:49.145455125 +0000 UTC m=+143.547556431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.678498 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljzk" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.691141 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4"] Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.692820 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sgvs4"] Feb 18 11:51:48 crc kubenswrapper[4717]: W0218 11:51:48.698959 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b5de2b3_a81b_458d_a061_e6f814de897b.slice/crio-46a3316b07a4dc98e252341b7c7fa7fcdeb97b89d858a4b02369eb61c58550e7 WatchSource:0}: Error finding container 46a3316b07a4dc98e252341b7c7fa7fcdeb97b89d858a4b02369eb61c58550e7: Status 404 returned error can't find the container with id 46a3316b07a4dc98e252341b7c7fa7fcdeb97b89d858a4b02369eb61c58550e7 Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.704904 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4gqhf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.714900 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx"] Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.732102 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.742755 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:48 crc kubenswrapper[4717]: E0218 11:51:48.742981 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:49.242962458 +0000 UTC m=+143.645063774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.743068 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:48 crc kubenswrapper[4717]: E0218 11:51:48.743658 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:49.243620967 +0000 UTC m=+143.645722283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.763885 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz"] Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.767992 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq"] Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.776307 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rzqs7"] Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.778404 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-zss8p"] Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.847679 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:48 crc kubenswrapper[4717]: E0218 11:51:48.847961 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 11:51:49.347936344 +0000 UTC m=+143.750037660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.848010 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:48 crc kubenswrapper[4717]: E0218 11:51:48.848465 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:49.348451758 +0000 UTC m=+143.750553074 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.891562 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nj9hx"] Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.896370 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tph6g" event={"ID":"384180d0-d0ee-41ed-bf82-b19b416e5972","Type":"ContainerStarted","Data":"6b4d19ecf35dc4cbbe39fcb7e753e641beaf8031e3d19272a432aaa3b002dc86"} Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.914918 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-s9dzs" event={"ID":"0e8734b5-4294-4091-b377-680aa4178a19","Type":"ContainerStarted","Data":"ce6584e79bfc02910eac5bdc7e0e84296754e5546602efa8debb7aefdea9dd5f"} Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.926839 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" event={"ID":"41932c5e-1add-49b1-8876-43ee2e9a4a91","Type":"ContainerStarted","Data":"15136b68c8d2bae8463242c13237b75f82ffe1898d96699c7f1c1ad8a877b326"} Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.929810 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-s5p7r"] Feb 18 11:51:48 crc kubenswrapper[4717]: W0218 11:51:48.930928 4717 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c48e9b8_f48a_48d9_921a_8274c0cb430a.slice/crio-d0698cd0ae8938d1c684a38cf5d623d30f7bed232e94b4b4d71a8c0dc47cef64 WatchSource:0}: Error finding container d0698cd0ae8938d1c684a38cf5d623d30f7bed232e94b4b4d71a8c0dc47cef64: Status 404 returned error can't find the container with id d0698cd0ae8938d1c684a38cf5d623d30f7bed232e94b4b4d71a8c0dc47cef64 Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.932376 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dnv67" event={"ID":"78e76a00-064d-419f-bc39-a6e0d81e3176","Type":"ContainerStarted","Data":"9054eadedb67a171c84e271a0ca0d464146d517f1d30dc5b2104be74e60f7163"} Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.934614 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq" event={"ID":"e7935a1e-ca2c-4dcb-87cf-c0269819a682","Type":"ContainerStarted","Data":"a0eabc22cb53085bab18f034f999dbc56e11a89657d71d41dfd0ac3508fce821"} Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.936070 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4" event={"ID":"6ef7ad6b-12b9-46b2-96e4-f4634170ab20","Type":"ContainerStarted","Data":"e1d91eb4337ff3cdca0a90231b6e29f169c22f5a4a2dbfcedb4ac40bab181465"} Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.940288 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz" event={"ID":"41f921ae-dc74-4acf-a699-a5a5e574224d","Type":"ContainerStarted","Data":"3f135161f8ae3a705decda04dc9943943b8e35c5ad19a5cb6b617ac4c50634fa"} Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.942379 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" event={"ID":"0f2b6b73-0ea9-4de9-9bc4-e76322e04a91","Type":"ContainerStarted","Data":"fa75c87efd03e4f2e24e4ad8675918a17fd1f564c9b91be27c8532fbba38e21d"} Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.948866 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:48 crc kubenswrapper[4717]: E0218 11:51:48.949435 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:49.4494175 +0000 UTC m=+143.851518816 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.969174 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mk6cn" event={"ID":"3c042f74-11a5-46a9-bc05-6b3278428e36","Type":"ContainerStarted","Data":"9f582af729b31e6c94b846d87f672d40061403978546399974a7b1424feb79c7"} Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.978851 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl" event={"ID":"5029ef8a-c6f0-43e4-b25c-d2f695020357","Type":"ContainerStarted","Data":"ac5ba1bf53463fc46e6a25bb9c273f83a7335dbd8e97fee1d858c2876bbba02b"} Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.980056 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" event={"ID":"7b5de2b3-a81b-458d-a061-e6f814de897b","Type":"ContainerStarted","Data":"46a3316b07a4dc98e252341b7c7fa7fcdeb97b89d858a4b02369eb61c58550e7"} Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.982833 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" event={"ID":"e43c4105-3759-46ee-bde3-191e4ce3c318","Type":"ContainerStarted","Data":"41c82a8490af843ec0699e978f9cd050dfd43711c74460c1e9c12bf33ad2eaaa"} Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.983847 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf" event={"ID":"9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b","Type":"ContainerStarted","Data":"bc092b37a0e9b0a5f6ee796799328cb22bb4692adb390e57dd35bf974007fb64"} Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.983868 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf" event={"ID":"9aaa4ac9-0635-4d2b-8d50-6ff5a52ed26b","Type":"ContainerStarted","Data":"fb5234cd37135056bc7e0ccfbab8f3e131a51d11c3b9057c52b82b4a75d659c7"} Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.988594 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" event={"ID":"0dc8b28c-8f88-4497-a439-2f1500cda5c2","Type":"ContainerStarted","Data":"45491fea31413f483b81e14dd807e2cebf9ac7293014741e6a7c50f53f93da27"} Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.988698 4717 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vkklc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.988748 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" podUID="ff0e32a4-5a0f-4779-a1db-b67aec04f414" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.989057 4717 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-svr2x container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 18 11:51:48 crc kubenswrapper[4717]: I0218 11:51:48.989074 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" podUID="23a7650f-4199-4243-8ca1-07a4d4a8c8b4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.052611 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:49 crc kubenswrapper[4717]: E0218 11:51:49.053003 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:49.552987745 +0000 UTC m=+143.955089071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.058791 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9"] Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.155524 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:49 crc kubenswrapper[4717]: E0218 11:51:49.155844 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:49.65581387 +0000 UTC m=+144.057915186 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.156361 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:49 crc kubenswrapper[4717]: E0218 11:51:49.161487 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:49.661466921 +0000 UTC m=+144.063568237 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.268228 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:49 crc kubenswrapper[4717]: E0218 11:51:49.268573 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:49.768555746 +0000 UTC m=+144.170657062 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.286475 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-2xjnq" podStartSLOduration=123.286455165 podStartE2EDuration="2m3.286455165s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:49.282906324 +0000 UTC m=+143.685007650" watchObservedRunningTime="2026-02-18 11:51:49.286455165 +0000 UTC m=+143.688556481" Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.316265 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4"] Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.372298 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:49 crc kubenswrapper[4717]: E0218 11:51:49.372594 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 11:51:49.872582455 +0000 UTC m=+144.274683771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.387233 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-sh4qc"] Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.483358 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:49 crc kubenswrapper[4717]: E0218 11:51:49.483504 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:49.983474729 +0000 UTC m=+144.385576045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.483995 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:49 crc kubenswrapper[4717]: E0218 11:51:49.484491 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:49.984473867 +0000 UTC m=+144.386575183 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.542446 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2"] Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.585360 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:49 crc kubenswrapper[4717]: E0218 11:51:49.585683 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:50.085668565 +0000 UTC m=+144.487769881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.688665 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:49 crc kubenswrapper[4717]: E0218 11:51:49.689079 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:50.189062136 +0000 UTC m=+144.591163452 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.709691 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zlwvm"] Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.724136 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj"] Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.789704 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:49 crc kubenswrapper[4717]: E0218 11:51:49.790466 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:50.290434729 +0000 UTC m=+144.692536055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.790683 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.790956 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k"] Feb 18 11:51:49 crc kubenswrapper[4717]: E0218 11:51:49.791160 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:50.291146029 +0000 UTC m=+144.693247355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.852583 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" podStartSLOduration=123.852562906 podStartE2EDuration="2m3.852562906s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:49.820091683 +0000 UTC m=+144.222192999" watchObservedRunningTime="2026-02-18 11:51:49.852562906 +0000 UTC m=+144.254664232" Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.892309 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:49 crc kubenswrapper[4717]: E0218 11:51:49.892696 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:50.392678827 +0000 UTC m=+144.794780143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.924175 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4gqhf"] Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.973201 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jmlmn"] Feb 18 11:51:49 crc kubenswrapper[4717]: I0218 11:51:49.993792 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:49 crc kubenswrapper[4717]: E0218 11:51:49.994244 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:50.494224325 +0000 UTC m=+144.896325641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.013237 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z4csx" event={"ID":"0546320f-3929-4452-a505-bdbb872741ad","Type":"ContainerStarted","Data":"cbec9919845f82d58e32e884c90738e19b1943864219b88044296e3fa3f815ac"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.031882 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" event={"ID":"60cce315-538e-4cd3-9fcc-d9aeab94b60d","Type":"ContainerStarted","Data":"c62dedce4b8c64e76713cdebdcf16948061cf684120d6e2750402a3192612c6d"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.040317 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz" event={"ID":"41f921ae-dc74-4acf-a699-a5a5e574224d","Type":"ContainerStarted","Data":"3512a61d5a938e565e39ccc446e35f0f3cf4db2764a55a11ee0421cf65295399"}
Feb 18 11:51:50 crc kubenswrapper[4717]: W0218 11:51:50.043615 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23ebf6ad_2b3d_4d30_885f_d8060245af0c.slice/crio-882811f298b27d58d5cb127651c569ac893144407f9efef8cb111325f6b4c065 WatchSource:0}: Error finding container 882811f298b27d58d5cb127651c569ac893144407f9efef8cb111325f6b4c065: Status 404 returned error can't find the container with id 882811f298b27d58d5cb127651c569ac893144407f9efef8cb111325f6b4c065
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.045899 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq" event={"ID":"e7935a1e-ca2c-4dcb-87cf-c0269819a682","Type":"ContainerStarted","Data":"b1ec2086ccf334196b5a58b07b8328900ae3e3c10ea011c254c4d06a85e3cac8"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.059623 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" event={"ID":"76d81a79-d7c1-427d-a8e2-11a62eee22cb","Type":"ContainerStarted","Data":"507e0760c62f38f32fa48b7c062cb59579a8d138d7e33bd29a6c6d389a137490"}
Feb 18 11:51:50 crc kubenswrapper[4717]: W0218 11:51:50.067543 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc53aaac1_4a8c_439e_8d51_60054a95ed11.slice/crio-3c5372c11f1ff44ad82614cb80fa5c3bbb6ad15b0455dcc0f0e77b9525a1e8fb WatchSource:0}: Error finding container 3c5372c11f1ff44ad82614cb80fa5c3bbb6ad15b0455dcc0f0e77b9525a1e8fb: Status 404 returned error can't find the container with id 3c5372c11f1ff44ad82614cb80fa5c3bbb6ad15b0455dcc0f0e77b9525a1e8fb
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.092540 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mk6cn" event={"ID":"3c042f74-11a5-46a9-bc05-6b3278428e36","Type":"ContainerStarted","Data":"104d62000e8e612706ae70094d4f9eae50e00e8019ee9ff36af460491125577c"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.096789 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 11:51:50 crc kubenswrapper[4717]: E0218 11:51:50.097005 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:50.596976977 +0000 UTC m=+144.999078293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.097140 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l"
Feb 18 11:51:50 crc kubenswrapper[4717]: E0218 11:51:50.097621 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:50.597612245 +0000 UTC m=+144.999713561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.111093 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7stbl" event={"ID":"cc067d60-aa98-4651-aad2-4d9dd7ee2683","Type":"ContainerStarted","Data":"cae743f4707acaeb537b27f90adf11719c79ae4c5af3e250794fff5fde0797ea"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.111153 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7stbl" event={"ID":"cc067d60-aa98-4651-aad2-4d9dd7ee2683","Type":"ContainerStarted","Data":"3e51b0516231941abaf7b5cff1965ebf99cfa5c2768766c8998878127996a08a"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.123992 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" event={"ID":"e43c4105-3759-46ee-bde3-191e4ce3c318","Type":"ContainerStarted","Data":"25298d68b36cef0448618910fe95925456dc0b847c292ce2d9e56b3beb0b44e5"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.135184 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dnv67" event={"ID":"78e76a00-064d-419f-bc39-a6e0d81e3176","Type":"ContainerStarted","Data":"10724d36f2d08d62a990001da35fbb8f3762cbcd80fbb3a3c447814af064efe0"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.165192 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cljzk"]
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.171961 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-zss8p" event={"ID":"cdaadf82-1fa9-426e-998f-096124387143","Type":"ContainerStarted","Data":"e4f397d307f0d4320f08bbcea9c9b9816debb6bb3f0d2671a79b034daf19f5e3"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.172015 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-zss8p" event={"ID":"cdaadf82-1fa9-426e-998f-096124387143","Type":"ContainerStarted","Data":"0851f0a54568320ccac6ec0c47af3ea3d8b5f5bb4ec0e971b199e47c6858a523"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.202950 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 11:51:50 crc kubenswrapper[4717]: E0218 11:51:50.203430 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:50.703392664 +0000 UTC m=+145.105493980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.225503 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf"]
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.243084 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq" event={"ID":"9c48e9b8-f48a-48d9-921a-8274c0cb430a","Type":"ContainerStarted","Data":"d0698cd0ae8938d1c684a38cf5d623d30f7bed232e94b4b4d71a8c0dc47cef64"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.248325 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" event={"ID":"dfcbb5c0-314c-427c-8897-987309aa9965","Type":"ContainerStarted","Data":"2949dacee059f564fa5e91bb9082b5642ace5210d79dcd6348392ad25dcf3dd3"}
Feb 18 11:51:50 crc kubenswrapper[4717]: W0218 11:51:50.251765 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fd93e51_d24d_4242_9224_372c409de81e.slice/crio-e5529e1d88ddca368d59401393651a2f8f353529a3c50a9563d2b23da2004c0d WatchSource:0}: Error finding container e5529e1d88ddca368d59401393651a2f8f353529a3c50a9563d2b23da2004c0d: Status 404 returned error can't find the container with id e5529e1d88ddca368d59401393651a2f8f353529a3c50a9563d2b23da2004c0d
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.252454 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-z28q6"]
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.300809 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qtw2w"]
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.300800 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-96gdl" podStartSLOduration=124.300777994 podStartE2EDuration="2m4.300777994s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:50.280297481 +0000 UTC m=+144.682398797" watchObservedRunningTime="2026-02-18 11:51:50.300777994 +0000 UTC m=+144.702879310"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.303571 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz" event={"ID":"1f172f40-f7d9-49a1-acf0-b2596b2c3bde","Type":"ContainerStarted","Data":"84c0ac6c9866bce45abef2939009e0df175ba08ea88d6d54cc962e48fe4bd9c8"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.311462 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.311547 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-s9dzs" event={"ID":"0e8734b5-4294-4091-b377-680aa4178a19","Type":"ContainerStarted","Data":"c47dc9e90833c72dc9416a34e9bdcd8410c3bac15c46e6dcae04e7095f86bcc0"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.312941 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-s9dzs"
Feb 18 11:51:50 crc kubenswrapper[4717]: E0218 11:51:50.357469 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:50.857451076 +0000 UTC m=+145.259552392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.361788 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7h5xf" podStartSLOduration=124.361770328 podStartE2EDuration="2m4.361770328s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:50.353845233 +0000 UTC m=+144.755946549" watchObservedRunningTime="2026-02-18 11:51:50.361770328 +0000 UTC m=+144.763871644"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.362501 4717 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9dzs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.362580 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.367278 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl" event={"ID":"5029ef8a-c6f0-43e4-b25c-d2f695020357","Type":"ContainerStarted","Data":"08da79712b6e95b1df4317d663e8babaf540d8759af827aa278ccdcaf89354ef"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.388711 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" podStartSLOduration=124.388687324 podStartE2EDuration="2m4.388687324s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:50.382158098 +0000 UTC m=+144.784259424" watchObservedRunningTime="2026-02-18 11:51:50.388687324 +0000 UTC m=+144.790788640"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.389709 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-dnv67"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.399100 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.399155 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.402703 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" event={"ID":"0c2e253c-448f-448b-8419-b898112f632c","Type":"ContainerStarted","Data":"fb27b3f80d1bbd4fde9687710c2babbb18dcbaf4224c2b102047860b3a9cc9f6"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.444819 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-sh4qc" event={"ID":"a9e5d6c2-0001-4e35-9ccd-f9096cf2431d","Type":"ContainerStarted","Data":"256ed8239e68c287a256fe688556c0b7b4cd8e1e1a25137648e707f04bbce9b3"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.455890 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 11:51:50 crc kubenswrapper[4717]: E0218 11:51:50.457181 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:50.957152031 +0000 UTC m=+145.359253377 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 11:51:50 crc kubenswrapper[4717]: W0218 11:51:50.460101 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod016f064e_8db6_41ed_a2af_2d9ea9169703.slice/crio-0b666ea7decf42cecfdb7fbd86d95219da1eecd3d200fcfe91b7a8d381330899 WatchSource:0}: Error finding container 0b666ea7decf42cecfdb7fbd86d95219da1eecd3d200fcfe91b7a8d381330899: Status 404 returned error can't find the container with id 0b666ea7decf42cecfdb7fbd86d95219da1eecd3d200fcfe91b7a8d381330899
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.485707 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" event={"ID":"518edf1a-e4f5-450a-90ff-151dc3106649","Type":"ContainerStarted","Data":"92d5f1e703a5f20f348cebecf24e24b47685eb9300ea3e0c3985a525a181ab8d"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.498228 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-kzbdk" podStartSLOduration=124.498204249 podStartE2EDuration="2m4.498204249s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:50.495201583 +0000 UTC m=+144.897302899" watchObservedRunningTime="2026-02-18 11:51:50.498204249 +0000 UTC m=+144.900305565"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.517207 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zlwvm" event={"ID":"fc2d4b81-f76c-46fb-bb47-d575784e849b","Type":"ContainerStarted","Data":"ca69acdbf2d3e883bd990d9d9881e989acb8c42e581ca85e074c68f99310d527"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.533512 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k" event={"ID":"4d5c57e4-bfdb-46b7-b58a-8c71f90c3321","Type":"ContainerStarted","Data":"e77164ebd0cebd6a9bd50d39e11447f45c3409f2342ddad09713997dcff8a661"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.555340 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-mk6cn" podStartSLOduration=124.555323793 podStartE2EDuration="2m4.555323793s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:50.55381005 +0000 UTC m=+144.955911386" watchObservedRunningTime="2026-02-18 11:51:50.555323793 +0000 UTC m=+144.957425109"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.559050 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l"
Feb 18 11:51:50 crc kubenswrapper[4717]: E0218 11:51:50.559496 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:51.059483712 +0000 UTC m=+145.461585028 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.602059 4717 generic.go:334] "Generic (PLEG): container finished" podID="6c88dca8-b239-4d98-b56a-3df7b296e4e7" containerID="63c9087b86ae26aca47a755f0645f8f59c2eed73048ef9caa6a7b7632668513a" exitCode=0
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.602424 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" event={"ID":"6c88dca8-b239-4d98-b56a-3df7b296e4e7","Type":"ContainerDied","Data":"63c9087b86ae26aca47a755f0645f8f59c2eed73048ef9caa6a7b7632668513a"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.617127 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-k2lck" event={"ID":"e856c873-7b06-4491-a886-800c11b0ce7a","Type":"ContainerStarted","Data":"f5f0321174c84ac8d477b50950d3fe2424d6cc1c50de9aec300c62fbafbff248"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.638968 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nj9hx" event={"ID":"2f579c52-f579-4acb-824a-46d88361ce98","Type":"ContainerStarted","Data":"2f03a76fa1ae6035b3c3494ee9a83f4595f04280cb836909333137bbb4ef4256"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.639765 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-h5czz" podStartSLOduration=124.639743994 podStartE2EDuration="2m4.639743994s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:50.596070062 +0000 UTC m=+144.998171398" watchObservedRunningTime="2026-02-18 11:51:50.639743994 +0000 UTC m=+145.041845310"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.640022 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-nj9hx"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.650556 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tph6g" event={"ID":"384180d0-d0ee-41ed-bf82-b19b416e5972","Type":"ContainerStarted","Data":"3d3eff37c2ba89289bcd0f7457f068e688646e9d683e0fa9827e62f9430720f7"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.650677 4717 patch_prober.go:28] interesting pod/console-operator-58897d9998-nj9hx container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.650705 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nj9hx" podUID="2f579c52-f579-4acb-824a-46d88361ce98" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.659806 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 11:51:50 crc kubenswrapper[4717]: E0218 11:51:50.659911 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:51.159891117 +0000 UTC m=+145.561992433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.659994 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l"
Feb 18 11:51:50 crc kubenswrapper[4717]: E0218 11:51:50.660413 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:51.160402962 +0000 UTC m=+145.562504278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.669810 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" event={"ID":"c9a6347e-94c1-41c6-829a-73fccd79a3ea","Type":"ContainerStarted","Data":"eeed8e2792e3a2faaa8163ea822754e47d3ef1af080f0ea46d145b1665fc3d69"}
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.670001 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-zss8p" podStartSLOduration=5.669983534 podStartE2EDuration="5.669983534s" podCreationTimestamp="2026-02-18 11:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:50.633503237 +0000 UTC m=+145.035604553" watchObservedRunningTime="2026-02-18 11:51:50.669983534 +0000 UTC m=+145.072084850"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.670306 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-s9dzs" podStartSLOduration=124.670301553 podStartE2EDuration="2m4.670301553s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:50.662344867 +0000 UTC m=+145.064446183" watchObservedRunningTime="2026-02-18 11:51:50.670301553 +0000 UTC m=+145.072402869"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.686598 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.705885 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qrvwl" podStartSLOduration=124.705868285 podStartE2EDuration="2m4.705868285s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:50.704071464 +0000 UTC m=+145.106172790" watchObservedRunningTime="2026-02-18 11:51:50.705868285 +0000 UTC m=+145.107969601"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.728963 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-svmld" podStartSLOduration=124.728948631 podStartE2EDuration="2m4.728948631s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:50.727370216 +0000 UTC m=+145.129471532" watchObservedRunningTime="2026-02-18 11:51:50.728948631 +0000 UTC m=+145.131049947"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.761511 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 11:51:50 crc kubenswrapper[4717]: E0218 11:51:50.762957 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:51.262891077 +0000 UTC m=+145.664992433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.772537 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dnv67" podStartSLOduration=124.77251495 podStartE2EDuration="2m4.77251495s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:50.767020914 +0000 UTC m=+145.169122240" watchObservedRunningTime="2026-02-18 11:51:50.77251495 +0000 UTC m=+145.174616266"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.828606 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-nj9hx" podStartSLOduration=124.828587565 podStartE2EDuration="2m4.828587565s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:50.801133234 +0000 UTC m=+145.203234550" watchObservedRunningTime="2026-02-18 11:51:50.828587565 +0000 UTC m=+145.230688881"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.863355 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l"
Feb 18 11:51:50 crc kubenswrapper[4717]: E0218 11:51:50.863769 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:51.363756306 +0000 UTC m=+145.765857622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.875214 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-k2lck" podStartSLOduration=5.875199281 podStartE2EDuration="5.875199281s" podCreationTimestamp="2026-02-18 11:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:50.874747138 +0000 UTC m=+145.276848464" watchObservedRunningTime="2026-02-18 11:51:50.875199281 +0000 UTC m=+145.277300597"
Feb 18 11:51:50 crc kubenswrapper[4717]: I0218 11:51:50.971297 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 11:51:50 crc kubenswrapper[4717]: E0218 11:51:50.971820 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:51.471793428 +0000 UTC m=+145.873894754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.052056 4717 csr.go:261] certificate signing request csr-wj2bj is approved, waiting to be issued
Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.074013 4717 csr.go:257] certificate signing request csr-wj2bj is issued
Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.075523 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l"
Feb 18 11:51:51 crc kubenswrapper[4717]: E0218 11:51:51.075802 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:51.575786716 +0000 UTC m=+145.977888042 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.178505 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 11:51:51 crc kubenswrapper[4717]: E0218 11:51:51.178963 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:51.67894525 +0000 UTC m=+146.081046566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.280850 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:51 crc kubenswrapper[4717]: E0218 11:51:51.281110 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:51.781098845 +0000 UTC m=+146.183200161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.383753 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:51 crc kubenswrapper[4717]: E0218 11:51:51.384120 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:51.884105665 +0000 UTC m=+146.286206971 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.409727 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:51:51 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:51:51 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:51:51 crc kubenswrapper[4717]: healthz check failed Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.409779 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.487775 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:51 crc kubenswrapper[4717]: E0218 11:51:51.488441 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 11:51:51.988428402 +0000 UTC m=+146.390529708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.591164 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:51 crc kubenswrapper[4717]: E0218 11:51:51.591947 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:52.091932706 +0000 UTC m=+146.494034022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.691608 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" event={"ID":"0c2e253c-448f-448b-8419-b898112f632c","Type":"ContainerStarted","Data":"f152317a5af14b200a958b107bb5019473a6a9c8b153a51f0a9ff35d59d35a92"} Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.692937 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:51 crc kubenswrapper[4717]: E0218 11:51:51.693468 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:52.193455673 +0000 UTC m=+146.595556989 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.725198 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" event={"ID":"624afd8c-8a76-42e8-b83d-d122f22464f9","Type":"ContainerStarted","Data":"e82feb0a4bfb18ccc03a788eaeef20e03677454f708ef3bcbd681123b69bd556"} Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.725450 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" event={"ID":"624afd8c-8a76-42e8-b83d-d122f22464f9","Type":"ContainerStarted","Data":"f13ce2538013baa38c8f5af9ff8b9aee17357e282212839f04f0c40baab40cf5"} Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.726035 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.730201 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" podStartSLOduration=125.730187568 podStartE2EDuration="2m5.730187568s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:51.727678617 +0000 UTC m=+146.129779933" watchObservedRunningTime="2026-02-18 11:51:51.730187568 +0000 UTC m=+146.132288884" Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 
11:51:51.743305 4717 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-972tf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.743546 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" podUID="624afd8c-8a76-42e8-b83d-d122f22464f9" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.744764 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" event={"ID":"0f2b6b73-0ea9-4de9-9bc4-e76322e04a91","Type":"ContainerDied","Data":"afa1a809f7e99cde30b56fc4002284d22112cd6c981b94a28984fdfc3c8b45b8"} Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.745646 4717 generic.go:334] "Generic (PLEG): container finished" podID="0f2b6b73-0ea9-4de9-9bc4-e76322e04a91" containerID="afa1a809f7e99cde30b56fc4002284d22112cd6c981b94a28984fdfc3c8b45b8" exitCode=0 Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.762020 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz" event={"ID":"1f172f40-f7d9-49a1-acf0-b2596b2c3bde","Type":"ContainerStarted","Data":"af06d5fea17039b9b3c1648851c6fce3e30085b5db17b3cf8675df196ff1f9fe"} Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.773174 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" podStartSLOduration=125.77315529 podStartE2EDuration="2m5.77315529s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:51.77209936 +0000 UTC m=+146.174200676" watchObservedRunningTime="2026-02-18 11:51:51.77315529 +0000 UTC m=+146.175256606" Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.791720 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zlwvm" event={"ID":"fc2d4b81-f76c-46fb-bb47-d575784e849b","Type":"ContainerStarted","Data":"85530ace8eab1708b9a958ba7c8b2a29ad1d36433584b9ba518b0bc1e4b9df9b"} Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.798036 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:51 crc kubenswrapper[4717]: E0218 11:51:51.798508 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:52.29848625 +0000 UTC m=+146.700587566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.857982 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljzk" event={"ID":"3fd93e51-d24d-4242-9224-372c409de81e","Type":"ContainerStarted","Data":"e27f9d3d65e5e75a89003b748f35aec60d2a1dd2e2f7ad2583ab54e8e48ed366"} Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.858432 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljzk" event={"ID":"3fd93e51-d24d-4242-9224-372c409de81e","Type":"ContainerStarted","Data":"e5529e1d88ddca368d59401393651a2f8f353529a3c50a9563d2b23da2004c0d"} Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.883556 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bs5zz" podStartSLOduration=125.883534449 podStartE2EDuration="2m5.883534449s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:51.872188797 +0000 UTC m=+146.274290193" watchObservedRunningTime="2026-02-18 11:51:51.883534449 +0000 UTC m=+146.285635765" Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.900433 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:51 crc kubenswrapper[4717]: E0218 11:51:51.901700 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:52.401686796 +0000 UTC m=+146.803788112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.975939 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" event={"ID":"dfcbb5c0-314c-427c-8897-987309aa9965","Type":"ContainerStarted","Data":"a826ae1478f89657ed28bcdb21d3054c34c1307b8331a610d88197fdc6f6be2b"} Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.993672 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z28q6" event={"ID":"54297048-fbeb-4aca-941a-90ab437b2068","Type":"ContainerStarted","Data":"1f13636a93077a8d09288a82498aeba2ecb9aecf0938b6c0ee079a2051f889be"} Feb 18 11:51:51 crc kubenswrapper[4717]: I0218 11:51:51.993729 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z28q6" 
event={"ID":"54297048-fbeb-4aca-941a-90ab437b2068","Type":"ContainerStarted","Data":"5bff58b069dd6d9a57dbf8181b345dca5478930b864b14f7af70d6b6d70b77fe"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.004680 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:52 crc kubenswrapper[4717]: E0218 11:51:52.005147 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:52.505128857 +0000 UTC m=+146.907230183 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.014722 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq" event={"ID":"e7935a1e-ca2c-4dcb-87cf-c0269819a682","Type":"ContainerStarted","Data":"e74f3283e7e6bc3ec0135cc18c33fd256ca3963f0ea3d554e228c5ad4c874b21"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.050036 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-sh4qc" 
event={"ID":"a9e5d6c2-0001-4e35-9ccd-f9096cf2431d","Type":"ContainerStarted","Data":"0dda635a74a7f1d1f7256447924db5d4a0e29dc122445b80e118aa81d33904ac"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.056511 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" podStartSLOduration=126.056491897 podStartE2EDuration="2m6.056491897s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:52.014342229 +0000 UTC m=+146.416443555" watchObservedRunningTime="2026-02-18 11:51:52.056491897 +0000 UTC m=+146.458593213" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.056634 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-z28q6" podStartSLOduration=126.056623421 podStartE2EDuration="2m6.056623421s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:52.055118408 +0000 UTC m=+146.457219734" watchObservedRunningTime="2026-02-18 11:51:52.056623421 +0000 UTC m=+146.458724747" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.059120 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nj9hx" event={"ID":"2f579c52-f579-4acb-824a-46d88361ce98","Type":"ContainerStarted","Data":"b913abf264e6376fa26d74059f066a6a6538b8dc81132f04b42aca21422ee6a0"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.060767 4717 patch_prober.go:28] interesting pod/console-operator-58897d9998-nj9hx container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 
10.217.0.31:8443: connect: connection refused" start-of-body= Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.060809 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nj9hx" podUID="2f579c52-f579-4acb-824a-46d88361ce98" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.082089 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k" event={"ID":"4d5c57e4-bfdb-46b7-b58a-8c71f90c3321","Type":"ContainerStarted","Data":"256ae5563d207e304fc39570ab4f3b577471a0e69bcae488f1935974e313b28e"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.082739 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.082799 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-18 11:46:51 +0000 UTC, rotation deadline is 2026-12-28 21:17:25.046019345 +0000 UTC Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.082817 4717 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7521h25m32.963204329s for next certificate rotation Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.108039 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:52 crc kubenswrapper[4717]: E0218 11:51:52.110084 4717 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:52.610066941 +0000 UTC m=+147.012168257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.115401 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" event={"ID":"76d81a79-d7c1-427d-a8e2-11a62eee22cb","Type":"ContainerStarted","Data":"bfb930a1717b62dbc3ad6e2273102dc73237fd27d348e66875165eff521f562d"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.116177 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.122948 4717 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vjmm4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.123063 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" podUID="76d81a79-d7c1-427d-a8e2-11a62eee22cb" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial 
tcp 10.217.0.22:8443: connect: connection refused" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.124684 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" event={"ID":"7b5de2b3-a81b-458d-a061-e6f814de897b","Type":"ContainerStarted","Data":"9fe5fed58e3206364a55b002cbc48d69a20d7afae530a3311ff575e9d6071890"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.124721 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" event={"ID":"7b5de2b3-a81b-458d-a061-e6f814de897b","Type":"ContainerStarted","Data":"fb465753354df61ee40460e4b4bde11a2971f678036634e30c600b1d4f9fb4d7"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.126928 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4gqhf" event={"ID":"23ebf6ad-2b3d-4d30-885f-d8060245af0c","Type":"ContainerStarted","Data":"882811f298b27d58d5cb127651c569ac893144407f9efef8cb111325f6b4c065"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.137438 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-sh4qc" podStartSLOduration=126.137418779 podStartE2EDuration="2m6.137418779s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:52.134138195 +0000 UTC m=+146.536239521" watchObservedRunningTime="2026-02-18 11:51:52.137418779 +0000 UTC m=+146.539520095" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.162144 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq" 
event={"ID":"9c48e9b8-f48a-48d9-921a-8274c0cb430a","Type":"ContainerStarted","Data":"e645560113c0f968babe087afba2094117a5e0728575be5e012eda6efe45d879"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.215321 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:52 crc kubenswrapper[4717]: E0218 11:51:52.216185 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:52.716170679 +0000 UTC m=+147.118271995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.241919 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-k2lck" event={"ID":"e856c873-7b06-4491-a886-800c11b0ce7a","Type":"ContainerStarted","Data":"4a3479e4ebb7d3c1e8a6f152dc5c9b7b181a7fad663573e62b5456c94c6f68f5"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.268016 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" 
event={"ID":"016f064e-8db6-41ed-a2af-2d9ea9169703","Type":"ContainerStarted","Data":"0b666ea7decf42cecfdb7fbd86d95219da1eecd3d200fcfe91b7a8d381330899"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.268221 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xjlqq" podStartSLOduration=126.268204748 podStartE2EDuration="2m6.268204748s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:52.265649796 +0000 UTC m=+146.667751112" watchObservedRunningTime="2026-02-18 11:51:52.268204748 +0000 UTC m=+146.670306064" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.268845 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.279368 4717 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qtw2w container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.279419 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" podUID="016f064e-8db6-41ed-a2af-2d9ea9169703" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.297758 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" 
event={"ID":"518edf1a-e4f5-450a-90ff-151dc3106649","Type":"ContainerStarted","Data":"7d2539e2246ceca13c8124a69fb40237bbdaf5be157519649a3625f3c777e29e"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.298696 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.317169 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:52 crc kubenswrapper[4717]: E0218 11:51:52.319757 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:52.819728314 +0000 UTC m=+147.221829630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.320620 4717 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rzqs7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.320644 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" podUID="518edf1a-e4f5-450a-90ff-151dc3106649" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.340052 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7stbl" event={"ID":"cc067d60-aa98-4651-aad2-4d9dd7ee2683","Type":"ContainerStarted","Data":"ac1641e13555db74caeb2f348e9f87ea541bb291fd38aefc0f7475067ca1b081"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.350549 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jmlmn" event={"ID":"c53aaac1-4a8c-439e-8d51-60054a95ed11","Type":"ContainerStarted","Data":"2fffd0f4aa2f7020505b06ccd06d0a720b7ebf96832f6265f7102a685433e251"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.350598 4717 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jmlmn" event={"ID":"c53aaac1-4a8c-439e-8d51-60054a95ed11","Type":"ContainerStarted","Data":"3c5372c11f1ff44ad82614cb80fa5c3bbb6ad15b0455dcc0f0e77b9525a1e8fb"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.365288 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" podStartSLOduration=126.365271669 podStartE2EDuration="2m6.365271669s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:52.363512199 +0000 UTC m=+146.765613515" watchObservedRunningTime="2026-02-18 11:51:52.365271669 +0000 UTC m=+146.767372985" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.379582 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" event={"ID":"60cce315-538e-4cd3-9fcc-d9aeab94b60d","Type":"ContainerStarted","Data":"e8fab3f680a7dba35cc02e990e76bf5c426e5cdbfb9f9575ac55e1ea0cb53476"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.379864 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.395794 4717 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9h9n9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.395845 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" 
podUID="60cce315-538e-4cd3-9fcc-d9aeab94b60d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.402513 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:51:52 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:51:52 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:51:52 crc kubenswrapper[4717]: healthz check failed Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.402556 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.420107 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:52 crc kubenswrapper[4717]: E0218 11:51:52.420236 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:52.920219952 +0000 UTC m=+147.322321268 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.427798 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:52 crc kubenswrapper[4717]: E0218 11:51:52.431000 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:52.930978828 +0000 UTC m=+147.333080144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.456635 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tph6g" event={"ID":"384180d0-d0ee-41ed-bf82-b19b416e5972","Type":"ContainerStarted","Data":"0cb8089bc20ba9735540116c76d41f41029ecf5e57469174bc75fb3a4096e615"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.457282 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k" podStartSLOduration=126.457247755 podStartE2EDuration="2m6.457247755s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:52.457135912 +0000 UTC m=+146.859237238" watchObservedRunningTime="2026-02-18 11:51:52.457247755 +0000 UTC m=+146.859349081" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.473680 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-br9vq" podStartSLOduration=126.473656292 podStartE2EDuration="2m6.473656292s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:52.40223039 +0000 UTC m=+146.804331706" watchObservedRunningTime="2026-02-18 
11:51:52.473656292 +0000 UTC m=+146.875757608" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.489071 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4" event={"ID":"6ef7ad6b-12b9-46b2-96e4-f4634170ab20","Type":"ContainerStarted","Data":"cb387f9244498b265e89801bda700da7c7b3075ef0b33f3e43d40fb65df28328"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.520865 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z4csx" event={"ID":"0546320f-3929-4452-a505-bdbb872741ad","Type":"ContainerStarted","Data":"a39644a6977b42e1f366998bcca2ddb604f320b05b5a15025f0439683ca8f96e"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.523730 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dj42d" podStartSLOduration=126.523719766 podStartE2EDuration="2m6.523719766s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:52.519983329 +0000 UTC m=+146.922084655" watchObservedRunningTime="2026-02-18 11:51:52.523719766 +0000 UTC m=+146.925821082" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.528871 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:52 crc kubenswrapper[4717]: E0218 11:51:52.530158 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-18 11:51:53.030137758 +0000 UTC m=+147.432239074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.544063 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" event={"ID":"41932c5e-1add-49b1-8876-43ee2e9a4a91","Type":"ContainerStarted","Data":"33feb0c7f250cb65a3c162d61e8573b0d7beeb0a14216421d26f71f5e0ab35f6"} Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.545860 4717 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9dzs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.545917 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.604030 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7fgq4" podStartSLOduration=126.604012679 podStartE2EDuration="2m6.604012679s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:52.602804335 +0000 UTC m=+147.004905651" watchObservedRunningTime="2026-02-18 11:51:52.604012679 +0000 UTC m=+147.006113995" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.633228 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:52 crc kubenswrapper[4717]: E0218 11:51:52.635214 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:53.135199006 +0000 UTC m=+147.537300402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.736402 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.738108 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" podStartSLOduration=126.738091452 podStartE2EDuration="2m6.738091452s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:52.735706935 +0000 UTC m=+147.137808261" watchObservedRunningTime="2026-02-18 11:51:52.738091452 +0000 UTC m=+147.140192768" Feb 18 11:51:52 crc kubenswrapper[4717]: E0218 11:51:52.738370 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:53.238330389 +0000 UTC m=+147.640431705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.739184 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" podStartSLOduration=126.739178113 podStartE2EDuration="2m6.739178113s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:52.682316186 +0000 UTC m=+147.084417502" watchObservedRunningTime="2026-02-18 11:51:52.739178113 +0000 UTC m=+147.141279429" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.772360 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-tph6g" podStartSLOduration=126.772341496 podStartE2EDuration="2m6.772341496s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:52.771690158 +0000 UTC m=+147.173791504" watchObservedRunningTime="2026-02-18 11:51:52.772341496 +0000 UTC m=+147.174442832" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.847824 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: 
\"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:52 crc kubenswrapper[4717]: E0218 11:51:52.848205 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:53.348192904 +0000 UTC m=+147.750294220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.850016 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7stbl" podStartSLOduration=126.849989785 podStartE2EDuration="2m6.849989785s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:52.821048462 +0000 UTC m=+147.223149778" watchObservedRunningTime="2026-02-18 11:51:52.849989785 +0000 UTC m=+147.252091121" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.851770 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jmlmn" podStartSLOduration=126.851760765 podStartE2EDuration="2m6.851760765s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 11:51:52.84769768 +0000 UTC m=+147.249798996" watchObservedRunningTime="2026-02-18 11:51:52.851760765 +0000 UTC m=+147.253862081" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.890047 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" podStartSLOduration=126.890017783 podStartE2EDuration="2m6.890017783s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:52.884030303 +0000 UTC m=+147.286131629" watchObservedRunningTime="2026-02-18 11:51:52.890017783 +0000 UTC m=+147.292119099" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.941529 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-sgvs4" podStartSLOduration=126.941513148 podStartE2EDuration="2m6.941513148s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:52.941095366 +0000 UTC m=+147.343196682" watchObservedRunningTime="2026-02-18 11:51:52.941513148 +0000 UTC m=+147.343614464" Feb 18 11:51:52 crc kubenswrapper[4717]: I0218 11:51:52.956906 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:52 crc kubenswrapper[4717]: E0218 11:51:52.957362 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:53.457329048 +0000 UTC m=+147.859430364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.057974 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:53 crc kubenswrapper[4717]: E0218 11:51:53.058368 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:53.558356761 +0000 UTC m=+147.960458077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.158990 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:53 crc kubenswrapper[4717]: E0218 11:51:53.159406 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:53.659386324 +0000 UTC m=+148.061487640 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.262612 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:53 crc kubenswrapper[4717]: E0218 11:51:53.263005 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:53.762988621 +0000 UTC m=+148.165089937 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.363404 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:53 crc kubenswrapper[4717]: E0218 11:51:53.363763 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:53.863744226 +0000 UTC m=+148.265845542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.401916 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:51:53 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:51:53 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:51:53 crc kubenswrapper[4717]: healthz check failed Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.402331 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.467425 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:53 crc kubenswrapper[4717]: E0218 11:51:53.467803 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 11:51:53.967790656 +0000 UTC m=+148.369891972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.561303 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" event={"ID":"016f064e-8db6-41ed-a2af-2d9ea9169703","Type":"ContainerStarted","Data":"d4f7be783138ab978151c2f4ca1fb2c7758525c7c9847fa427738e3680fa34e9"} Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.562112 4717 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qtw2w container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.562156 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" podUID="016f064e-8db6-41ed-a2af-2d9ea9169703" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.562790 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljzk" event={"ID":"3fd93e51-d24d-4242-9224-372c409de81e","Type":"ContainerStarted","Data":"f8433af10eab13a43d4befc2c5ada12227d053d3070a855a6ba48d1e8f161cec"} 
Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.564951 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4gqhf" event={"ID":"23ebf6ad-2b3d-4d30-885f-d8060245af0c","Type":"ContainerStarted","Data":"ce47f4efa0783e6516f2421138c3e1213be2d437df63c5557c70fa1663d88b3b"} Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.564982 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4gqhf" event={"ID":"23ebf6ad-2b3d-4d30-885f-d8060245af0c","Type":"ContainerStarted","Data":"46a91d7ef4183d10436f63bd4568c872c525ed4841c819f7291d4c3b8935b27f"} Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.567824 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:53 crc kubenswrapper[4717]: E0218 11:51:53.567956 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.067929464 +0000 UTC m=+148.470030790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.568171 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:53 crc kubenswrapper[4717]: E0218 11:51:53.568500 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.06849195 +0000 UTC m=+148.470593266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.569349 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k" event={"ID":"4d5c57e4-bfdb-46b7-b58a-8c71f90c3321","Type":"ContainerStarted","Data":"9555fc3449f9b6191d985eb2c88d3253964f61b53921ddffb92d6ef8096c959d"} Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.571831 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" event={"ID":"c9a6347e-94c1-41c6-829a-73fccd79a3ea","Type":"ContainerStarted","Data":"1417facf7c42f6cbdc2b9bc4d09a5ebc6b896bfb95164d1f7fab4802ac6eab2a"} Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.574171 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" event={"ID":"0f2b6b73-0ea9-4de9-9bc4-e76322e04a91","Type":"ContainerStarted","Data":"be916ad18a70d3f2d48b98d9f846e2c671e6a152e8d7998da208fc4008490e4f"} Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.576244 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" event={"ID":"6c88dca8-b239-4d98-b56a-3df7b296e4e7","Type":"ContainerStarted","Data":"37c9f2ddd25d708b7c56263a6584f074e96d2ef27969f7ee79e9d8839c60f23a"} Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.578243 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zlwvm" 
event={"ID":"fc2d4b81-f76c-46fb-bb47-d575784e849b","Type":"ContainerStarted","Data":"19330fdde1d8d3c54a527a88185cb2d37aec788e52b2161d149b3f8d73b905c6"} Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.579106 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-zlwvm" Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.581078 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dt9kj" event={"ID":"dfcbb5c0-314c-427c-8897-987309aa9965","Type":"ContainerStarted","Data":"666277ee87e4e1704db3a1fef5b2ffa0aaeb54a5072ce73a81ec7691e8de19e5"} Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.581824 4717 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-972tf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.581869 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" podUID="624afd8c-8a76-42e8-b83d-d122f22464f9" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.582326 4717 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vjmm4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.582373 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" 
podUID="76d81a79-d7c1-427d-a8e2-11a62eee22cb" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.588147 4717 patch_prober.go:28] interesting pod/console-operator-58897d9998-nj9hx container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.588220 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nj9hx" podUID="2f579c52-f579-4acb-824a-46d88361ce98" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.669442 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:53 crc kubenswrapper[4717]: E0218 11:51:53.669689 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.169647776 +0000 UTC m=+148.571749092 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.669803 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:53 crc kubenswrapper[4717]: E0218 11:51:53.671152 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.171144199 +0000 UTC m=+148.573245505 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.695551 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljzk" podStartSLOduration=127.695533813 podStartE2EDuration="2m7.695533813s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:53.69158364 +0000 UTC m=+148.093684956" watchObservedRunningTime="2026-02-18 11:51:53.695533813 +0000 UTC m=+148.097635139" Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.697434 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-z4csx" podStartSLOduration=127.697424757 podStartE2EDuration="2m7.697424757s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:53.016711676 +0000 UTC m=+147.418812992" watchObservedRunningTime="2026-02-18 11:51:53.697424757 +0000 UTC m=+148.099526073" Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.770788 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:53 crc kubenswrapper[4717]: E0218 11:51:53.770943 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.270920697 +0000 UTC m=+148.673022013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.771947 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:53 crc kubenswrapper[4717]: E0218 11:51:53.774306 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.274290303 +0000 UTC m=+148.676391679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.847604 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9h9n9" Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.877000 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:53 crc kubenswrapper[4717]: E0218 11:51:53.877402 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.377387205 +0000 UTC m=+148.779488521 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.930942 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" podStartSLOduration=127.930924567 podStartE2EDuration="2m7.930924567s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:53.916383054 +0000 UTC m=+148.318484360" watchObservedRunningTime="2026-02-18 11:51:53.930924567 +0000 UTC m=+148.333025873" Feb 18 11:51:53 crc kubenswrapper[4717]: I0218 11:51:53.981310 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:53 crc kubenswrapper[4717]: E0218 11:51:53.982121 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.482089323 +0000 UTC m=+148.884190639 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.082920 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:54 crc kubenswrapper[4717]: E0218 11:51:54.083398 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.583374833 +0000 UTC m=+148.985476169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.148912 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" podStartSLOduration=128.148891767 podStartE2EDuration="2m8.148891767s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:54.115366903 +0000 UTC m=+148.517468209" watchObservedRunningTime="2026-02-18 11:51:54.148891767 +0000 UTC m=+148.550993073" Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.184511 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:54 crc kubenswrapper[4717]: E0218 11:51:54.184913 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.6848837 +0000 UTC m=+149.086985016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.211293 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.216081 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-4gqhf" podStartSLOduration=128.216064317 podStartE2EDuration="2m8.216064317s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:54.150936355 +0000 UTC m=+148.553037671" watchObservedRunningTime="2026-02-18 11:51:54.216064317 +0000 UTC m=+148.618165633" Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.282909 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-zlwvm" podStartSLOduration=9.282889458 podStartE2EDuration="9.282889458s" podCreationTimestamp="2026-02-18 11:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:54.216561001 +0000 UTC m=+148.618662317" watchObservedRunningTime="2026-02-18 11:51:54.282889458 +0000 UTC m=+148.684990774" Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.285346 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:54 crc kubenswrapper[4717]: E0218 11:51:54.285636 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.785610255 +0000 UTC m=+149.187711571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.285744 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:54 crc kubenswrapper[4717]: E0218 11:51:54.286050 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.786037407 +0000 UTC m=+149.188138723 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.387355 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:54 crc kubenswrapper[4717]: E0218 11:51:54.387584 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.887541924 +0000 UTC m=+149.289643240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.387912 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:54 crc kubenswrapper[4717]: E0218 11:51:54.388226 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:54.888216733 +0000 UTC m=+149.290318049 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.393323 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:51:54 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:51:54 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:51:54 crc kubenswrapper[4717]: healthz check failed Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.393387 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.489411 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:54 crc kubenswrapper[4717]: E0218 11:51:54.489774 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 11:51:54.989757201 +0000 UTC m=+149.391858517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.582181 4717 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rzqs7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.582247 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" podUID="518edf1a-e4f5-450a-90ff-151dc3106649" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.590474 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:54 crc kubenswrapper[4717]: E0218 11:51:54.590812 4717 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:55.090799975 +0000 UTC m=+149.492901291 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.607082 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" event={"ID":"c9a6347e-94c1-41c6-829a-73fccd79a3ea","Type":"ContainerStarted","Data":"5c7de4f86afd54fd92446a8810dfde09b115941985900e5e75ba3c25081a071d"} Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.608591 4717 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qtw2w container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.608629 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" podUID="016f064e-8db6-41ed-a2af-2d9ea9169703" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.621994 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vjmm4" Feb 18 
11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.691715 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:54 crc kubenswrapper[4717]: E0218 11:51:54.692137 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:55.192117436 +0000 UTC m=+149.594218752 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.792973 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:54 crc kubenswrapper[4717]: E0218 11:51:54.793984 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 11:51:55.293962913 +0000 UTC m=+149.696064259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.896609 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:54 crc kubenswrapper[4717]: E0218 11:51:54.896813 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:55.396755626 +0000 UTC m=+149.798856962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.896858 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.897925 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.898281 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.898367 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.898393 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:54 crc kubenswrapper[4717]: E0218 11:51:54.898745 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:55.398732603 +0000 UTC m=+149.800833919 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.907658 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.908899 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.909615 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.963525 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:54 crc kubenswrapper[4717]: I0218 11:51:54.999843 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:55 crc kubenswrapper[4717]: E0218 11:51:55.000199 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:55.500180508 +0000 UTC m=+149.902281844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.062528 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.101397 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:55 crc kubenswrapper[4717]: E0218 11:51:55.101780 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:55.601767457 +0000 UTC m=+150.003868773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.148583 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.155306 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.203788 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:55 crc kubenswrapper[4717]: E0218 11:51:55.203984 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:55.703955334 +0000 UTC m=+150.106056650 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.204082 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:55 crc kubenswrapper[4717]: E0218 11:51:55.204607 4717 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:55.704599472 +0000 UTC m=+150.106700788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.305448 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:55 crc kubenswrapper[4717]: E0218 11:51:55.305836 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:55.805812061 +0000 UTC m=+150.207913377 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.391939 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:51:55 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:51:55 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:51:55 crc kubenswrapper[4717]: healthz check failed Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.392296 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.406948 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:55 crc kubenswrapper[4717]: E0218 11:51:55.407376 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 11:51:55.907362419 +0000 UTC m=+150.309463735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.508111 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:55 crc kubenswrapper[4717]: E0218 11:51:55.508310 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:56.008236008 +0000 UTC m=+150.410337324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.508423 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:55 crc kubenswrapper[4717]: E0218 11:51:55.508786 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:56.008775493 +0000 UTC m=+150.410876809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.608137 4717 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rzqs7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.608182 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" podUID="518edf1a-e4f5-450a-90ff-151dc3106649" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.608666 4717 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-972tf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.608693 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" podUID="624afd8c-8a76-42e8-b83d-d122f22464f9" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.620861 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:55 crc kubenswrapper[4717]: E0218 11:51:55.621227 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:56.12120807 +0000 UTC m=+150.523309386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.673403 4717 generic.go:334] "Generic (PLEG): container finished" podID="0c2e253c-448f-448b-8419-b898112f632c" containerID="f152317a5af14b200a958b107bb5019473a6a9c8b153a51f0a9ff35d59d35a92" exitCode=0 Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.673526 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" event={"ID":"0c2e253c-448f-448b-8419-b898112f632c","Type":"ContainerDied","Data":"f152317a5af14b200a958b107bb5019473a6a9c8b153a51f0a9ff35d59d35a92"} Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.699625 
4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" event={"ID":"c9a6347e-94c1-41c6-829a-73fccd79a3ea","Type":"ContainerStarted","Data":"4186962a6a43423afe368edde8fc8c0699684f0e9d2ac36e958a1c3c8632b3d4"} Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.721931 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:55 crc kubenswrapper[4717]: E0218 11:51:55.724305 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:56.224289681 +0000 UTC m=+150.626391007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.728439 4717 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-x6rkx container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.728474 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" podUID="0f2b6b73-0ea9-4de9-9bc4-e76322e04a91" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.748560 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c9n76"] Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.750025 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c9n76" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.824670 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.825174 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48gl6\" (UniqueName: \"kubernetes.io/projected/9250b3da-040d-4f0c-84d0-5d795bf3479d-kube-api-access-48gl6\") pod \"certified-operators-c9n76\" (UID: \"9250b3da-040d-4f0c-84d0-5d795bf3479d\") " pod="openshift-marketplace/certified-operators-c9n76" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.825210 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9250b3da-040d-4f0c-84d0-5d795bf3479d-catalog-content\") pod \"certified-operators-c9n76\" (UID: \"9250b3da-040d-4f0c-84d0-5d795bf3479d\") " pod="openshift-marketplace/certified-operators-c9n76" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.825278 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9250b3da-040d-4f0c-84d0-5d795bf3479d-utilities\") pod \"certified-operators-c9n76\" (UID: \"9250b3da-040d-4f0c-84d0-5d795bf3479d\") " pod="openshift-marketplace/certified-operators-c9n76" Feb 18 11:51:55 crc kubenswrapper[4717]: E0218 11:51:55.825372 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-18 11:51:56.325356936 +0000 UTC m=+150.727458252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:55 crc kubenswrapper[4717]: W0218 11:51:55.831382 4717 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Feb 18 11:51:55 crc kubenswrapper[4717]: E0218 11:51:55.831417 4717 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.918166 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6ksxg"] Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.919203 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6ksxg" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.929815 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48gl6\" (UniqueName: \"kubernetes.io/projected/9250b3da-040d-4f0c-84d0-5d795bf3479d-kube-api-access-48gl6\") pod \"certified-operators-c9n76\" (UID: \"9250b3da-040d-4f0c-84d0-5d795bf3479d\") " pod="openshift-marketplace/certified-operators-c9n76" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.929859 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.929891 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1b37906-b51d-4abf-be9c-8607a92dfa40-utilities\") pod \"community-operators-6ksxg\" (UID: \"e1b37906-b51d-4abf-be9c-8607a92dfa40\") " pod="openshift-marketplace/community-operators-6ksxg" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.929914 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9250b3da-040d-4f0c-84d0-5d795bf3479d-catalog-content\") pod \"certified-operators-c9n76\" (UID: \"9250b3da-040d-4f0c-84d0-5d795bf3479d\") " pod="openshift-marketplace/certified-operators-c9n76" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.929956 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz58r\" (UniqueName: 
\"kubernetes.io/projected/e1b37906-b51d-4abf-be9c-8607a92dfa40-kube-api-access-bz58r\") pod \"community-operators-6ksxg\" (UID: \"e1b37906-b51d-4abf-be9c-8607a92dfa40\") " pod="openshift-marketplace/community-operators-6ksxg" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.929985 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9250b3da-040d-4f0c-84d0-5d795bf3479d-utilities\") pod \"certified-operators-c9n76\" (UID: \"9250b3da-040d-4f0c-84d0-5d795bf3479d\") " pod="openshift-marketplace/certified-operators-c9n76" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.930004 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1b37906-b51d-4abf-be9c-8607a92dfa40-catalog-content\") pod \"community-operators-6ksxg\" (UID: \"e1b37906-b51d-4abf-be9c-8607a92dfa40\") " pod="openshift-marketplace/community-operators-6ksxg" Feb 18 11:51:55 crc kubenswrapper[4717]: E0218 11:51:55.930578 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:56.430564258 +0000 UTC m=+150.832665574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.930941 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9250b3da-040d-4f0c-84d0-5d795bf3479d-catalog-content\") pod \"certified-operators-c9n76\" (UID: \"9250b3da-040d-4f0c-84d0-5d795bf3479d\") " pod="openshift-marketplace/certified-operators-c9n76" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.931148 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9250b3da-040d-4f0c-84d0-5d795bf3479d-utilities\") pod \"certified-operators-c9n76\" (UID: \"9250b3da-040d-4f0c-84d0-5d795bf3479d\") " pod="openshift-marketplace/certified-operators-c9n76" Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.948650 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c9n76"] Feb 18 11:51:55 crc kubenswrapper[4717]: I0218 11:51:55.954585 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.030515 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:56 
crc kubenswrapper[4717]: I0218 11:51:56.030797 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz58r\" (UniqueName: \"kubernetes.io/projected/e1b37906-b51d-4abf-be9c-8607a92dfa40-kube-api-access-bz58r\") pod \"community-operators-6ksxg\" (UID: \"e1b37906-b51d-4abf-be9c-8607a92dfa40\") " pod="openshift-marketplace/community-operators-6ksxg" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.030849 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1b37906-b51d-4abf-be9c-8607a92dfa40-catalog-content\") pod \"community-operators-6ksxg\" (UID: \"e1b37906-b51d-4abf-be9c-8607a92dfa40\") " pod="openshift-marketplace/community-operators-6ksxg" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.030917 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1b37906-b51d-4abf-be9c-8607a92dfa40-utilities\") pod \"community-operators-6ksxg\" (UID: \"e1b37906-b51d-4abf-be9c-8607a92dfa40\") " pod="openshift-marketplace/community-operators-6ksxg" Feb 18 11:51:56 crc kubenswrapper[4717]: E0218 11:51:56.031148 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:56.531122298 +0000 UTC m=+150.933223684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.031569 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1b37906-b51d-4abf-be9c-8607a92dfa40-utilities\") pod \"community-operators-6ksxg\" (UID: \"e1b37906-b51d-4abf-be9c-8607a92dfa40\") " pod="openshift-marketplace/community-operators-6ksxg" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.031800 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1b37906-b51d-4abf-be9c-8607a92dfa40-catalog-content\") pod \"community-operators-6ksxg\" (UID: \"e1b37906-b51d-4abf-be9c-8607a92dfa40\") " pod="openshift-marketplace/community-operators-6ksxg" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.106300 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48gl6\" (UniqueName: \"kubernetes.io/projected/9250b3da-040d-4f0c-84d0-5d795bf3479d-kube-api-access-48gl6\") pod \"certified-operators-c9n76\" (UID: \"9250b3da-040d-4f0c-84d0-5d795bf3479d\") " pod="openshift-marketplace/certified-operators-c9n76" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.133670 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: 
\"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:56 crc kubenswrapper[4717]: E0218 11:51:56.134027 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:56.634013994 +0000 UTC m=+151.036115300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.147110 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6ksxg"] Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.163737 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz58r\" (UniqueName: \"kubernetes.io/projected/e1b37906-b51d-4abf-be9c-8607a92dfa40-kube-api-access-bz58r\") pod \"community-operators-6ksxg\" (UID: \"e1b37906-b51d-4abf-be9c-8607a92dfa40\") " pod="openshift-marketplace/community-operators-6ksxg" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.169324 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m2cjl"] Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.170425 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.190877 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4swdf"] Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.192629 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.224334 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m2cjl"] Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.242054 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4swdf"] Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.245562 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.245883 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/154937ce-02f4-41f5-a061-fb7890e7cf40-catalog-content\") pod \"certified-operators-m2cjl\" (UID: \"154937ce-02f4-41f5-a061-fb7890e7cf40\") " pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.245947 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/739737ee-803a-478a-a7f2-de797ffeca2a-catalog-content\") pod \"community-operators-4swdf\" (UID: \"739737ee-803a-478a-a7f2-de797ffeca2a\") " 
pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.245975 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/739737ee-803a-478a-a7f2-de797ffeca2a-utilities\") pod \"community-operators-4swdf\" (UID: \"739737ee-803a-478a-a7f2-de797ffeca2a\") " pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.246035 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8r4d\" (UniqueName: \"kubernetes.io/projected/739737ee-803a-478a-a7f2-de797ffeca2a-kube-api-access-v8r4d\") pod \"community-operators-4swdf\" (UID: \"739737ee-803a-478a-a7f2-de797ffeca2a\") " pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.246083 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/154937ce-02f4-41f5-a061-fb7890e7cf40-utilities\") pod \"certified-operators-m2cjl\" (UID: \"154937ce-02f4-41f5-a061-fb7890e7cf40\") " pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.246115 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgjvb\" (UniqueName: \"kubernetes.io/projected/154937ce-02f4-41f5-a061-fb7890e7cf40-kube-api-access-tgjvb\") pod \"certified-operators-m2cjl\" (UID: \"154937ce-02f4-41f5-a061-fb7890e7cf40\") " pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:51:56 crc kubenswrapper[4717]: E0218 11:51:56.246283 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-18 11:51:56.746247676 +0000 UTC m=+151.148348992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.347234 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/739737ee-803a-478a-a7f2-de797ffeca2a-catalog-content\") pod \"community-operators-4swdf\" (UID: \"739737ee-803a-478a-a7f2-de797ffeca2a\") " pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.347304 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/739737ee-803a-478a-a7f2-de797ffeca2a-utilities\") pod \"community-operators-4swdf\" (UID: \"739737ee-803a-478a-a7f2-de797ffeca2a\") " pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.347399 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8r4d\" (UniqueName: \"kubernetes.io/projected/739737ee-803a-478a-a7f2-de797ffeca2a-kube-api-access-v8r4d\") pod \"community-operators-4swdf\" (UID: \"739737ee-803a-478a-a7f2-de797ffeca2a\") " pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.347464 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/154937ce-02f4-41f5-a061-fb7890e7cf40-utilities\") pod \"certified-operators-m2cjl\" (UID: \"154937ce-02f4-41f5-a061-fb7890e7cf40\") " pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.347497 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.347539 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgjvb\" (UniqueName: \"kubernetes.io/projected/154937ce-02f4-41f5-a061-fb7890e7cf40-kube-api-access-tgjvb\") pod \"certified-operators-m2cjl\" (UID: \"154937ce-02f4-41f5-a061-fb7890e7cf40\") " pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.347573 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/154937ce-02f4-41f5-a061-fb7890e7cf40-catalog-content\") pod \"certified-operators-m2cjl\" (UID: \"154937ce-02f4-41f5-a061-fb7890e7cf40\") " pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.348175 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/154937ce-02f4-41f5-a061-fb7890e7cf40-catalog-content\") pod \"certified-operators-m2cjl\" (UID: \"154937ce-02f4-41f5-a061-fb7890e7cf40\") " pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.348690 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/739737ee-803a-478a-a7f2-de797ffeca2a-catalog-content\") pod \"community-operators-4swdf\" (UID: \"739737ee-803a-478a-a7f2-de797ffeca2a\") " pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:51:56 crc kubenswrapper[4717]: E0218 11:51:56.348952 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:56.848939847 +0000 UTC m=+151.251041163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.349161 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/154937ce-02f4-41f5-a061-fb7890e7cf40-utilities\") pod \"certified-operators-m2cjl\" (UID: \"154937ce-02f4-41f5-a061-fb7890e7cf40\") " pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.357693 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/739737ee-803a-478a-a7f2-de797ffeca2a-utilities\") pod \"community-operators-4swdf\" (UID: \"739737ee-803a-478a-a7f2-de797ffeca2a\") " pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.397196 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6ksxg" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.400056 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8r4d\" (UniqueName: \"kubernetes.io/projected/739737ee-803a-478a-a7f2-de797ffeca2a-kube-api-access-v8r4d\") pod \"community-operators-4swdf\" (UID: \"739737ee-803a-478a-a7f2-de797ffeca2a\") " pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.435118 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgjvb\" (UniqueName: \"kubernetes.io/projected/154937ce-02f4-41f5-a061-fb7890e7cf40-kube-api-access-tgjvb\") pod \"certified-operators-m2cjl\" (UID: \"154937ce-02f4-41f5-a061-fb7890e7cf40\") " pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.456569 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:56 crc kubenswrapper[4717]: E0218 11:51:56.456936 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:56.956914548 +0000 UTC m=+151.359015864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.457012 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:56 crc kubenswrapper[4717]: E0218 11:51:56.457429 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:56.957418642 +0000 UTC m=+151.359519958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.463765 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:51:56 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:51:56 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:51:56 crc kubenswrapper[4717]: healthz check failed Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.463849 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.560204 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:56 crc kubenswrapper[4717]: E0218 11:51:56.561430 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 11:51:57.06141041 +0000 UTC m=+151.463511726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.642699 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.668817 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:56 crc kubenswrapper[4717]: E0218 11:51:56.669164 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:57.169152534 +0000 UTC m=+151.571253850 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.674719 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.681431 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c9n76" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.681522 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.769712 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:56 crc kubenswrapper[4717]: E0218 11:51:56.770096 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:57.270075374 +0000 UTC m=+151.672176690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.786406 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" event={"ID":"c9a6347e-94c1-41c6-829a-73fccd79a3ea","Type":"ContainerStarted","Data":"9637ea2f258bfc09bf5cbc346c6acf05cbbb1545e39f051f4e4a5e4a1b5b78e0"} Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.878792 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:56 crc kubenswrapper[4717]: E0218 11:51:56.879517 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:57.379478076 +0000 UTC m=+151.781579392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.886063 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-s5p7r" podStartSLOduration=11.886044793 podStartE2EDuration="11.886044793s" podCreationTimestamp="2026-02-18 11:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:51:56.846590241 +0000 UTC m=+151.248691577" watchObservedRunningTime="2026-02-18 11:51:56.886044793 +0000 UTC m=+151.288146109" Feb 18 11:51:56 crc kubenswrapper[4717]: I0218 11:51:56.981616 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:56 crc kubenswrapper[4717]: E0218 11:51:56.981873 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:57.481854108 +0000 UTC m=+151.883955424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:56 crc kubenswrapper[4717]: W0218 11:51:56.988956 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-ed4354e123d699258fa49ad6f342bb9feb20d0ab092781c01f53364553d172f9 WatchSource:0}: Error finding container ed4354e123d699258fa49ad6f342bb9feb20d0ab092781c01f53364553d172f9: Status 404 returned error can't find the container with id ed4354e123d699258fa49ad6f342bb9feb20d0ab092781c01f53364553d172f9 Feb 18 11:51:57 crc kubenswrapper[4717]: W0218 11:51:57.051162 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-020ca70bb26f85ad9bff5dede1722b86985be858e323673ae5afe74232c1a17a WatchSource:0}: Error finding container 020ca70bb26f85ad9bff5dede1722b86985be858e323673ae5afe74232c1a17a: Status 404 returned error can't find the container with id 020ca70bb26f85ad9bff5dede1722b86985be858e323673ae5afe74232c1a17a Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.084148 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:57 crc 
kubenswrapper[4717]: E0218 11:51:57.084745 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:57.584732173 +0000 UTC m=+151.986833489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.101895 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.101936 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.147430 4717 patch_prober.go:28] interesting pod/apiserver-76f77b778f-z4csx container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 18 11:51:57 crc kubenswrapper[4717]: [+]log ok Feb 18 11:51:57 crc kubenswrapper[4717]: [+]etcd ok Feb 18 11:51:57 crc kubenswrapper[4717]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 18 11:51:57 crc kubenswrapper[4717]: [+]poststarthook/generic-apiserver-start-informers ok Feb 18 11:51:57 crc kubenswrapper[4717]: [+]poststarthook/max-in-flight-filter ok Feb 18 11:51:57 crc kubenswrapper[4717]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 18 11:51:57 crc kubenswrapper[4717]: 
[+]poststarthook/image.openshift.io-apiserver-caches ok Feb 18 11:51:57 crc kubenswrapper[4717]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 18 11:51:57 crc kubenswrapper[4717]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 18 11:51:57 crc kubenswrapper[4717]: [+]poststarthook/project.openshift.io-projectcache ok Feb 18 11:51:57 crc kubenswrapper[4717]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 18 11:51:57 crc kubenswrapper[4717]: [+]poststarthook/openshift.io-startinformers ok Feb 18 11:51:57 crc kubenswrapper[4717]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 18 11:51:57 crc kubenswrapper[4717]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 18 11:51:57 crc kubenswrapper[4717]: livez check failed Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.147495 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-z4csx" podUID="0546320f-3929-4452-a505-bdbb872741ad" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.186349 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:57 crc kubenswrapper[4717]: E0218 11:51:57.186767 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 11:51:57.686748895 +0000 UTC m=+152.088850211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.191779 4717 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.223431 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.224037 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.239782 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.240102 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.240992 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.243133 4717 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-18T11:51:57.191794018Z","Handler":null,"Name":""} Feb 18 11:51:57 crc 
kubenswrapper[4717]: I0218 11:51:57.295921 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:57 crc kubenswrapper[4717]: E0218 11:51:57.297918 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 11:51:57.797898836 +0000 UTC m=+152.200000152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bxl4l" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.304947 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6ksxg"] Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.327568 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.399523 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:51:57 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 
18 11:51:57 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:51:57 crc kubenswrapper[4717]: healthz check failed Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.399571 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.399825 4717 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.399863 4717 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.400908 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.401077 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8b7cd29-4310-4f4b-8183-7ef4fa3c5198-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c8b7cd29-4310-4f4b-8183-7ef4fa3c5198\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.401128 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/c8b7cd29-4310-4f4b-8183-7ef4fa3c5198-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c8b7cd29-4310-4f4b-8183-7ef4fa3c5198\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.438378 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.471534 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.492284 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.502134 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8b7cd29-4310-4f4b-8183-7ef4fa3c5198-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c8b7cd29-4310-4f4b-8183-7ef4fa3c5198\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.506247 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8b7cd29-4310-4f4b-8183-7ef4fa3c5198-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c8b7cd29-4310-4f4b-8183-7ef4fa3c5198\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.507437 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8b7cd29-4310-4f4b-8183-7ef4fa3c5198-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c8b7cd29-4310-4f4b-8183-7ef4fa3c5198\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:51:57 
crc kubenswrapper[4717]: I0218 11:51:57.511921 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.564823 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4swdf"] Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.608176 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.625238 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8b7cd29-4310-4f4b-8183-7ef4fa3c5198-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c8b7cd29-4310-4f4b-8183-7ef4fa3c5198\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.668106 4717 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.668541 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.670216 4717 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9dzs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.670727 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.674092 4717 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9dzs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.674138 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: 
connection refused" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.705855 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n6fmv"] Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.706944 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6fmv" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.711105 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.725047 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6fmv"] Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.771839 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.786376 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x6rkx" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.807250 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bxl4l\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.814574 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee5d22d-8884-4563-8329-c475346f3a03-utilities\") pod \"redhat-marketplace-n6fmv\" (UID: \"0ee5d22d-8884-4563-8329-c475346f3a03\") " 
pod="openshift-marketplace/redhat-marketplace-n6fmv" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.814632 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee5d22d-8884-4563-8329-c475346f3a03-catalog-content\") pod \"redhat-marketplace-n6fmv\" (UID: \"0ee5d22d-8884-4563-8329-c475346f3a03\") " pod="openshift-marketplace/redhat-marketplace-n6fmv" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.814652 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pthzp\" (UniqueName: \"kubernetes.io/projected/0ee5d22d-8884-4563-8329-c475346f3a03-kube-api-access-pthzp\") pod \"redhat-marketplace-n6fmv\" (UID: \"0ee5d22d-8884-4563-8329-c475346f3a03\") " pod="openshift-marketplace/redhat-marketplace-n6fmv" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.823611 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"882216874ce8ffb0958cdb65e9168d404ef6fba228f2a68d7375a05c20e145ef"} Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.823656 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"020ca70bb26f85ad9bff5dede1722b86985be858e323673ae5afe74232c1a17a"} Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.844095 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"005e097e6497297bf6065cb6c061bd6912edab6a29b5e72537d4518bbf18bd08"} Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.844170 4717 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5d06a9db5aebfa246dcff5392118cfa3e98d281ed61e4dc001252e5fa4ca4608"} Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.846217 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4swdf" event={"ID":"739737ee-803a-478a-a7f2-de797ffeca2a","Type":"ContainerStarted","Data":"b6ee8298f8a48b6a15c7cdb36b089071e12836759fded7522fd97386e1cc27ea"} Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.847178 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ksxg" event={"ID":"e1b37906-b51d-4abf-be9c-8607a92dfa40","Type":"ContainerStarted","Data":"96fa754b0b28f0f006c01ddfb79192fa8a2cdc74a164e42276a51dfc7b1eb982"} Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.865895 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8b639cce0d0025118431fe00d213630f5d8224579d94e5e5033ad1e1ceef2709"} Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.865937 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ed4354e123d699258fa49ad6f342bb9feb20d0ab092781c01f53364553d172f9"} Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.868307 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.896158 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6kmwp" Feb 18 11:51:57 
crc kubenswrapper[4717]: I0218 11:51:57.914375 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.915433 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee5d22d-8884-4563-8329-c475346f3a03-utilities\") pod \"redhat-marketplace-n6fmv\" (UID: \"0ee5d22d-8884-4563-8329-c475346f3a03\") " pod="openshift-marketplace/redhat-marketplace-n6fmv" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.915481 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee5d22d-8884-4563-8329-c475346f3a03-catalog-content\") pod \"redhat-marketplace-n6fmv\" (UID: \"0ee5d22d-8884-4563-8329-c475346f3a03\") " pod="openshift-marketplace/redhat-marketplace-n6fmv" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.915518 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pthzp\" (UniqueName: \"kubernetes.io/projected/0ee5d22d-8884-4563-8329-c475346f3a03-kube-api-access-pthzp\") pod \"redhat-marketplace-n6fmv\" (UID: \"0ee5d22d-8884-4563-8329-c475346f3a03\") " pod="openshift-marketplace/redhat-marketplace-n6fmv" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.931293 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.931926 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.933433 4717 patch_prober.go:28] interesting pod/console-f9d7485db-mk6cn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 
10.217.0.12:8443: connect: connection refused" start-of-body= Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.933493 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-mk6cn" podUID="3c042f74-11a5-46a9-bc05-6b3278428e36" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.948772 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee5d22d-8884-4563-8329-c475346f3a03-catalog-content\") pod \"redhat-marketplace-n6fmv\" (UID: \"0ee5d22d-8884-4563-8329-c475346f3a03\") " pod="openshift-marketplace/redhat-marketplace-n6fmv" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.949079 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee5d22d-8884-4563-8329-c475346f3a03-utilities\") pod \"redhat-marketplace-n6fmv\" (UID: \"0ee5d22d-8884-4563-8329-c475346f3a03\") " pod="openshift-marketplace/redhat-marketplace-n6fmv" Feb 18 11:51:57 crc kubenswrapper[4717]: I0218 11:51:57.992581 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pthzp\" (UniqueName: \"kubernetes.io/projected/0ee5d22d-8884-4563-8329-c475346f3a03-kube-api-access-pthzp\") pod \"redhat-marketplace-n6fmv\" (UID: \"0ee5d22d-8884-4563-8329-c475346f3a03\") " pod="openshift-marketplace/redhat-marketplace-n6fmv" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.067487 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.076752 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.084058 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m2cjl"] Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.118402 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c2e253c-448f-448b-8419-b898112f632c-secret-volume\") pod \"0c2e253c-448f-448b-8419-b898112f632c\" (UID: \"0c2e253c-448f-448b-8419-b898112f632c\") " Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.118464 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cgz2\" (UniqueName: \"kubernetes.io/projected/0c2e253c-448f-448b-8419-b898112f632c-kube-api-access-6cgz2\") pod \"0c2e253c-448f-448b-8419-b898112f632c\" (UID: \"0c2e253c-448f-448b-8419-b898112f632c\") " Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.118494 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c2e253c-448f-448b-8419-b898112f632c-config-volume\") pod \"0c2e253c-448f-448b-8419-b898112f632c\" (UID: \"0c2e253c-448f-448b-8419-b898112f632c\") " Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.119769 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c9n76"] Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.125674 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2e253c-448f-448b-8419-b898112f632c-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"0c2e253c-448f-448b-8419-b898112f632c" (UID: "0c2e253c-448f-448b-8419-b898112f632c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.163994 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c2e253c-448f-448b-8419-b898112f632c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0c2e253c-448f-448b-8419-b898112f632c" (UID: "0c2e253c-448f-448b-8419-b898112f632c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.166526 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c2e253c-448f-448b-8419-b898112f632c-kube-api-access-6cgz2" (OuterVolumeSpecName: "kube-api-access-6cgz2") pod "0c2e253c-448f-448b-8419-b898112f632c" (UID: "0c2e253c-448f-448b-8419-b898112f632c"). InnerVolumeSpecName "kube-api-access-6cgz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.211615 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6fmv" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.254126 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7mwxp"] Feb 18 11:51:58 crc kubenswrapper[4717]: E0218 11:51:58.254632 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2e253c-448f-448b-8419-b898112f632c" containerName="collect-profiles" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.254710 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2e253c-448f-448b-8419-b898112f632c" containerName="collect-profiles" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.254884 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c2e253c-448f-448b-8419-b898112f632c" containerName="collect-profiles" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.255293 4717 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c2e253c-448f-448b-8419-b898112f632c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.255335 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cgz2\" (UniqueName: \"kubernetes.io/projected/0c2e253c-448f-448b-8419-b898112f632c-kube-api-access-6cgz2\") on node \"crc\" DevicePath \"\"" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.255346 4717 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c2e253c-448f-448b-8419-b898112f632c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.255908 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:51:58 crc kubenswrapper[4717]: E0218 11:51:58.273858 4717 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod739737ee_803a_478a_a7f2_de797ffeca2a.slice/crio-ec3073ebe62d4b2031ee5451ff6637cbb54a0b3d18914ee6f18c2eb26fa324bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1b37906_b51d_4abf_be9c_8607a92dfa40.slice/crio-35436349804c7ef36b7b8d6013fcc68e8f160e3ad2d94f0e2da3c83812f2d68a.scope\": RecentStats: unable to find data in memory cache]" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.324250 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mwxp"] Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.367822 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-nj9hx" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.390363 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.396052 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:51:58 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:51:58 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:51:58 crc kubenswrapper[4717]: healthz check failed Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.396105 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" 
podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.463842 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b07ffbf-aa95-48ef-baa5-68fb036483b7-utilities\") pod \"redhat-marketplace-7mwxp\" (UID: \"3b07ffbf-aa95-48ef-baa5-68fb036483b7\") " pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.463936 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn29z\" (UniqueName: \"kubernetes.io/projected/3b07ffbf-aa95-48ef-baa5-68fb036483b7-kube-api-access-mn29z\") pod \"redhat-marketplace-7mwxp\" (UID: \"3b07ffbf-aa95-48ef-baa5-68fb036483b7\") " pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.463996 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b07ffbf-aa95-48ef-baa5-68fb036483b7-catalog-content\") pod \"redhat-marketplace-7mwxp\" (UID: \"3b07ffbf-aa95-48ef-baa5-68fb036483b7\") " pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.517293 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.574174 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b07ffbf-aa95-48ef-baa5-68fb036483b7-catalog-content\") pod \"redhat-marketplace-7mwxp\" (UID: \"3b07ffbf-aa95-48ef-baa5-68fb036483b7\") " pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:51:58 crc 
kubenswrapper[4717]: I0218 11:51:58.574275 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b07ffbf-aa95-48ef-baa5-68fb036483b7-utilities\") pod \"redhat-marketplace-7mwxp\" (UID: \"3b07ffbf-aa95-48ef-baa5-68fb036483b7\") " pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.574356 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn29z\" (UniqueName: \"kubernetes.io/projected/3b07ffbf-aa95-48ef-baa5-68fb036483b7-kube-api-access-mn29z\") pod \"redhat-marketplace-7mwxp\" (UID: \"3b07ffbf-aa95-48ef-baa5-68fb036483b7\") " pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.575821 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b07ffbf-aa95-48ef-baa5-68fb036483b7-catalog-content\") pod \"redhat-marketplace-7mwxp\" (UID: \"3b07ffbf-aa95-48ef-baa5-68fb036483b7\") " pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.576161 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b07ffbf-aa95-48ef-baa5-68fb036483b7-utilities\") pod \"redhat-marketplace-7mwxp\" (UID: \"3b07ffbf-aa95-48ef-baa5-68fb036483b7\") " pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.647235 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn29z\" (UniqueName: \"kubernetes.io/projected/3b07ffbf-aa95-48ef-baa5-68fb036483b7-kube-api-access-mn29z\") pod \"redhat-marketplace-7mwxp\" (UID: \"3b07ffbf-aa95-48ef-baa5-68fb036483b7\") " pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.681934 4717 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6bvgx"] Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.683105 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6bvgx" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.696449 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.703884 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6bvgx"] Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.756756 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-972tf" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.784777 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7x7z\" (UniqueName: \"kubernetes.io/projected/fb729e3d-5019-4004-876e-c5d39e77e97e-kube-api-access-k7x7z\") pod \"redhat-operators-6bvgx\" (UID: \"fb729e3d-5019-4004-876e-c5d39e77e97e\") " pod="openshift-marketplace/redhat-operators-6bvgx" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.784858 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb729e3d-5019-4004-876e-c5d39e77e97e-utilities\") pod \"redhat-operators-6bvgx\" (UID: \"fb729e3d-5019-4004-876e-c5d39e77e97e\") " pod="openshift-marketplace/redhat-operators-6bvgx" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.784953 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb729e3d-5019-4004-876e-c5d39e77e97e-catalog-content\") pod \"redhat-operators-6bvgx\" (UID: 
\"fb729e3d-5019-4004-876e-c5d39e77e97e\") " pod="openshift-marketplace/redhat-operators-6bvgx" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.837283 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 11:51:58 crc kubenswrapper[4717]: W0218 11:51:58.856992 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc8b7cd29_4310_4f4b_8183_7ef4fa3c5198.slice/crio-41024954ddc8cc436eba58f280be5c6f41225d70923a09e86d7f2d77386762a5 WatchSource:0}: Error finding container 41024954ddc8cc436eba58f280be5c6f41225d70923a09e86d7f2d77386762a5: Status 404 returned error can't find the container with id 41024954ddc8cc436eba58f280be5c6f41225d70923a09e86d7f2d77386762a5 Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.885554 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7x7z\" (UniqueName: \"kubernetes.io/projected/fb729e3d-5019-4004-876e-c5d39e77e97e-kube-api-access-k7x7z\") pod \"redhat-operators-6bvgx\" (UID: \"fb729e3d-5019-4004-876e-c5d39e77e97e\") " pod="openshift-marketplace/redhat-operators-6bvgx" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.885603 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb729e3d-5019-4004-876e-c5d39e77e97e-utilities\") pod \"redhat-operators-6bvgx\" (UID: \"fb729e3d-5019-4004-876e-c5d39e77e97e\") " pod="openshift-marketplace/redhat-operators-6bvgx" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.885632 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb729e3d-5019-4004-876e-c5d39e77e97e-catalog-content\") pod \"redhat-operators-6bvgx\" (UID: \"fb729e3d-5019-4004-876e-c5d39e77e97e\") " pod="openshift-marketplace/redhat-operators-6bvgx" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 
11:51:58.885789 4717 generic.go:334] "Generic (PLEG): container finished" podID="154937ce-02f4-41f5-a061-fb7890e7cf40" containerID="50f64f8b1792894fb97656073756851bba6dc9ff0e3576a50ff04f123a1d3293" exitCode=0 Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.885977 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2cjl" event={"ID":"154937ce-02f4-41f5-a061-fb7890e7cf40","Type":"ContainerDied","Data":"50f64f8b1792894fb97656073756851bba6dc9ff0e3576a50ff04f123a1d3293"} Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.886023 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2cjl" event={"ID":"154937ce-02f4-41f5-a061-fb7890e7cf40","Type":"ContainerStarted","Data":"f0139ced8b027db48697856698855f47734df57778fe861c0d663f343319ec32"} Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.886128 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb729e3d-5019-4004-876e-c5d39e77e97e-catalog-content\") pod \"redhat-operators-6bvgx\" (UID: \"fb729e3d-5019-4004-876e-c5d39e77e97e\") " pod="openshift-marketplace/redhat-operators-6bvgx" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.886353 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb729e3d-5019-4004-876e-c5d39e77e97e-utilities\") pod \"redhat-operators-6bvgx\" (UID: \"fb729e3d-5019-4004-876e-c5d39e77e97e\") " pod="openshift-marketplace/redhat-operators-6bvgx" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.887718 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bxl4l"] Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.897052 4717 generic.go:334] "Generic (PLEG): container finished" podID="9250b3da-040d-4f0c-84d0-5d795bf3479d" 
containerID="1a07f9d3cf7eb662d7d47791fef1bc3ee7485cb3d9a72400edfaaed557a4b6dc" exitCode=0 Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.897165 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c9n76" event={"ID":"9250b3da-040d-4f0c-84d0-5d795bf3479d","Type":"ContainerDied","Data":"1a07f9d3cf7eb662d7d47791fef1bc3ee7485cb3d9a72400edfaaed557a4b6dc"} Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.897480 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c9n76" event={"ID":"9250b3da-040d-4f0c-84d0-5d795bf3479d","Type":"ContainerStarted","Data":"22cff45d9424657d1376e59cafc1b2d18d65dd21ff353c2ee08840603bc3af13"} Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.907777 4717 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.918376 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7x7z\" (UniqueName: \"kubernetes.io/projected/fb729e3d-5019-4004-876e-c5d39e77e97e-kube-api-access-k7x7z\") pod \"redhat-operators-6bvgx\" (UID: \"fb729e3d-5019-4004-876e-c5d39e77e97e\") " pod="openshift-marketplace/redhat-operators-6bvgx" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.918484 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" event={"ID":"0c2e253c-448f-448b-8419-b898112f632c","Type":"ContainerDied","Data":"fb27b3f80d1bbd4fde9687710c2babbb18dcbaf4224c2b102047860b3a9cc9f6"} Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.918544 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb27b3f80d1bbd4fde9687710c2babbb18dcbaf4224c2b102047860b3a9cc9f6" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.918670 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.927530 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.946381 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c8b7cd29-4310-4f4b-8183-7ef4fa3c5198","Type":"ContainerStarted","Data":"41024954ddc8cc436eba58f280be5c6f41225d70923a09e86d7f2d77386762a5"} Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.990537 4717 generic.go:334] "Generic (PLEG): container finished" podID="739737ee-803a-478a-a7f2-de797ffeca2a" containerID="ec3073ebe62d4b2031ee5451ff6637cbb54a0b3d18914ee6f18c2eb26fa324bb" exitCode=0 Feb 18 11:51:58 crc kubenswrapper[4717]: I0218 11:51:58.990604 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4swdf" event={"ID":"739737ee-803a-478a-a7f2-de797ffeca2a","Type":"ContainerDied","Data":"ec3073ebe62d4b2031ee5451ff6637cbb54a0b3d18914ee6f18c2eb26fa324bb"} Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.013056 4717 generic.go:334] "Generic (PLEG): container finished" podID="e1b37906-b51d-4abf-be9c-8607a92dfa40" containerID="35436349804c7ef36b7b8d6013fcc68e8f160e3ad2d94f0e2da3c83812f2d68a" exitCode=0 Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.014305 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ksxg" event={"ID":"e1b37906-b51d-4abf-be9c-8607a92dfa40","Type":"ContainerDied","Data":"35436349804c7ef36b7b8d6013fcc68e8f160e3ad2d94f0e2da3c83812f2d68a"} Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.042527 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6fmv"] Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 
11:51:59.074220 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.085433 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6bvgx" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.087884 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5f7dc"] Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.096514 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.097532 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5f7dc"] Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.220782 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1df54bb8-8456-4c72-8cd4-49abc687ba4d-utilities\") pod \"redhat-operators-5f7dc\" (UID: \"1df54bb8-8456-4c72-8cd4-49abc687ba4d\") " pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.222238 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmlpj\" (UniqueName: \"kubernetes.io/projected/1df54bb8-8456-4c72-8cd4-49abc687ba4d-kube-api-access-hmlpj\") pod \"redhat-operators-5f7dc\" (UID: \"1df54bb8-8456-4c72-8cd4-49abc687ba4d\") " pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.222338 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1df54bb8-8456-4c72-8cd4-49abc687ba4d-catalog-content\") pod \"redhat-operators-5f7dc\" (UID: \"1df54bb8-8456-4c72-8cd4-49abc687ba4d\") " pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.249638 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.250641 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.254557 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.255437 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.258601 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.324015 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8f84bd2-c14f-4544-affa-5515de7f8414-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f8f84bd2-c14f-4544-affa-5515de7f8414\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.324084 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1df54bb8-8456-4c72-8cd4-49abc687ba4d-catalog-content\") pod \"redhat-operators-5f7dc\" (UID: \"1df54bb8-8456-4c72-8cd4-49abc687ba4d\") " pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.324169 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1df54bb8-8456-4c72-8cd4-49abc687ba4d-utilities\") pod \"redhat-operators-5f7dc\" (UID: \"1df54bb8-8456-4c72-8cd4-49abc687ba4d\") " pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.324206 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8f84bd2-c14f-4544-affa-5515de7f8414-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f8f84bd2-c14f-4544-affa-5515de7f8414\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.324229 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmlpj\" (UniqueName: \"kubernetes.io/projected/1df54bb8-8456-4c72-8cd4-49abc687ba4d-kube-api-access-hmlpj\") pod \"redhat-operators-5f7dc\" (UID: \"1df54bb8-8456-4c72-8cd4-49abc687ba4d\") " pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.325281 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1df54bb8-8456-4c72-8cd4-49abc687ba4d-catalog-content\") pod \"redhat-operators-5f7dc\" (UID: \"1df54bb8-8456-4c72-8cd4-49abc687ba4d\") " pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.325841 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1df54bb8-8456-4c72-8cd4-49abc687ba4d-utilities\") pod \"redhat-operators-5f7dc\" (UID: \"1df54bb8-8456-4c72-8cd4-49abc687ba4d\") " pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.370795 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-hmlpj\" (UniqueName: \"kubernetes.io/projected/1df54bb8-8456-4c72-8cd4-49abc687ba4d-kube-api-access-hmlpj\") pod \"redhat-operators-5f7dc\" (UID: \"1df54bb8-8456-4c72-8cd4-49abc687ba4d\") " pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.401203 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mwxp"] Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.401440 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:51:59 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:51:59 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:51:59 crc kubenswrapper[4717]: healthz check failed Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.401488 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:51:59 crc kubenswrapper[4717]: W0218 11:51:59.416055 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b07ffbf_aa95_48ef_baa5_68fb036483b7.slice/crio-c144a3bcd7d650e0b88ec103f16cd97c451839e9dd016ba135406733c381cca6 WatchSource:0}: Error finding container c144a3bcd7d650e0b88ec103f16cd97c451839e9dd016ba135406733c381cca6: Status 404 returned error can't find the container with id c144a3bcd7d650e0b88ec103f16cd97c451839e9dd016ba135406733c381cca6 Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.427846 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f8f84bd2-c14f-4544-affa-5515de7f8414-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f8f84bd2-c14f-4544-affa-5515de7f8414\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.427979 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8f84bd2-c14f-4544-affa-5515de7f8414-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f8f84bd2-c14f-4544-affa-5515de7f8414\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.428071 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8f84bd2-c14f-4544-affa-5515de7f8414-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f8f84bd2-c14f-4544-affa-5515de7f8414\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.449418 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8f84bd2-c14f-4544-affa-5515de7f8414-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f8f84bd2-c14f-4544-affa-5515de7f8414\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.471579 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.506715 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6bvgx"] Feb 18 11:51:59 crc kubenswrapper[4717]: W0218 11:51:59.511948 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb729e3d_5019_4004_876e_c5d39e77e97e.slice/crio-5d917d020a5fcf46a3b4ff6c19c3a5ac305383990c0730152af14731f93dd5f0 WatchSource:0}: Error finding container 5d917d020a5fcf46a3b4ff6c19c3a5ac305383990c0730152af14731f93dd5f0: Status 404 returned error can't find the container with id 5d917d020a5fcf46a3b4ff6c19c3a5ac305383990c0730152af14731f93dd5f0 Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.576417 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:51:59 crc kubenswrapper[4717]: I0218 11:51:59.768251 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5f7dc"] Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.013540 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.027534 4717 generic.go:334] "Generic (PLEG): container finished" podID="0ee5d22d-8884-4563-8329-c475346f3a03" containerID="68cd50bb6525a71d6977658cf31890898cf4ebc6ba614c4600a8f3ec02d32673" exitCode=0 Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.027611 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6fmv" event={"ID":"0ee5d22d-8884-4563-8329-c475346f3a03","Type":"ContainerDied","Data":"68cd50bb6525a71d6977658cf31890898cf4ebc6ba614c4600a8f3ec02d32673"} Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.027645 4717 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-n6fmv" event={"ID":"0ee5d22d-8884-4563-8329-c475346f3a03","Type":"ContainerStarted","Data":"3f9663309aae98dbfc5580b7b071e87731d8b7a08f02a68771107e2ae3993ff3"} Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.034869 4717 generic.go:334] "Generic (PLEG): container finished" podID="fb729e3d-5019-4004-876e-c5d39e77e97e" containerID="e69fc61350f0d83561ffd2832c9aaf7963da7061c08a497fe3fb8d0b95e63a46" exitCode=0 Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.035291 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bvgx" event={"ID":"fb729e3d-5019-4004-876e-c5d39e77e97e","Type":"ContainerDied","Data":"e69fc61350f0d83561ffd2832c9aaf7963da7061c08a497fe3fb8d0b95e63a46"} Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.035330 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bvgx" event={"ID":"fb729e3d-5019-4004-876e-c5d39e77e97e","Type":"ContainerStarted","Data":"5d917d020a5fcf46a3b4ff6c19c3a5ac305383990c0730152af14731f93dd5f0"} Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.061236 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c8b7cd29-4310-4f4b-8183-7ef4fa3c5198","Type":"ContainerStarted","Data":"c636b561c7a1b46b60877233f197366ed9f097f0e8ef9f4fa6e490539e0d35fc"} Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.075409 4717 generic.go:334] "Generic (PLEG): container finished" podID="3b07ffbf-aa95-48ef-baa5-68fb036483b7" containerID="aca524bdac9fa82fded5725add16bbf50e70e2f1d835016117af2dca36ba9a7f" exitCode=0 Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.075605 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mwxp" 
event={"ID":"3b07ffbf-aa95-48ef-baa5-68fb036483b7","Type":"ContainerDied","Data":"aca524bdac9fa82fded5725add16bbf50e70e2f1d835016117af2dca36ba9a7f"} Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.075636 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mwxp" event={"ID":"3b07ffbf-aa95-48ef-baa5-68fb036483b7","Type":"ContainerStarted","Data":"c144a3bcd7d650e0b88ec103f16cd97c451839e9dd016ba135406733c381cca6"} Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.102064 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.102046587 podStartE2EDuration="3.102046587s" podCreationTimestamp="2026-02-18 11:51:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:52:00.100117792 +0000 UTC m=+154.502219108" watchObservedRunningTime="2026-02-18 11:52:00.102046587 +0000 UTC m=+154.504147903" Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.111073 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" event={"ID":"d9b3201b-b5e0-4a95-9015-97309eb9957e","Type":"ContainerStarted","Data":"fd6149742754938f5c7b76833a7bb33360201d387c6fa660ca140a325908dab1"} Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.111153 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" event={"ID":"d9b3201b-b5e0-4a95-9015-97309eb9957e","Type":"ContainerStarted","Data":"1027d3c50c33bb3d95a7da497331e210fecdd677d5c24adb93e25b4dc411d484"} Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.111288 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.114471 4717 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5f7dc" event={"ID":"1df54bb8-8456-4c72-8cd4-49abc687ba4d","Type":"ContainerStarted","Data":"c265e597b41f433e321fe733e3cb0879f5d5219a733017f9d18fee96accad6e0"} Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.114510 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5f7dc" event={"ID":"1df54bb8-8456-4c72-8cd4-49abc687ba4d","Type":"ContainerStarted","Data":"0c9d8585de690f1821869ac9f23809374bd7a22434707ce79d796ddf0fbaa3b2"} Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.182523 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" podStartSLOduration=134.182500195 podStartE2EDuration="2m14.182500195s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:52:00.177645027 +0000 UTC m=+154.579746353" watchObservedRunningTime="2026-02-18 11:52:00.182500195 +0000 UTC m=+154.584601511" Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.394053 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:52:00 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:52:00 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:52:00 crc kubenswrapper[4717]: healthz check failed Feb 18 11:52:00 crc kubenswrapper[4717]: I0218 11:52:00.394941 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 
18 11:52:01 crc kubenswrapper[4717]: I0218 11:52:01.241631 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f8f84bd2-c14f-4544-affa-5515de7f8414","Type":"ContainerStarted","Data":"f686e6b1f620ba5db29a3db38bf03d975073488b016a4439cf590618fce0584f"} Feb 18 11:52:01 crc kubenswrapper[4717]: I0218 11:52:01.241681 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f8f84bd2-c14f-4544-affa-5515de7f8414","Type":"ContainerStarted","Data":"83fd545b94f6ce6a18994116cf28822d97a228c15ef92507ffb53a075ee486c6"} Feb 18 11:52:01 crc kubenswrapper[4717]: I0218 11:52:01.257276 4717 generic.go:334] "Generic (PLEG): container finished" podID="c8b7cd29-4310-4f4b-8183-7ef4fa3c5198" containerID="c636b561c7a1b46b60877233f197366ed9f097f0e8ef9f4fa6e490539e0d35fc" exitCode=0 Feb 18 11:52:01 crc kubenswrapper[4717]: I0218 11:52:01.257386 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c8b7cd29-4310-4f4b-8183-7ef4fa3c5198","Type":"ContainerDied","Data":"c636b561c7a1b46b60877233f197366ed9f097f0e8ef9f4fa6e490539e0d35fc"} Feb 18 11:52:01 crc kubenswrapper[4717]: I0218 11:52:01.266597 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.266585028 podStartE2EDuration="2.266585028s" podCreationTimestamp="2026-02-18 11:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:52:01.265587729 +0000 UTC m=+155.667689045" watchObservedRunningTime="2026-02-18 11:52:01.266585028 +0000 UTC m=+155.668686344" Feb 18 11:52:01 crc kubenswrapper[4717]: I0218 11:52:01.292945 4717 generic.go:334] "Generic (PLEG): container finished" podID="1df54bb8-8456-4c72-8cd4-49abc687ba4d" 
containerID="c265e597b41f433e321fe733e3cb0879f5d5219a733017f9d18fee96accad6e0" exitCode=0 Feb 18 11:52:01 crc kubenswrapper[4717]: I0218 11:52:01.293217 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5f7dc" event={"ID":"1df54bb8-8456-4c72-8cd4-49abc687ba4d","Type":"ContainerDied","Data":"c265e597b41f433e321fe733e3cb0879f5d5219a733017f9d18fee96accad6e0"} Feb 18 11:52:01 crc kubenswrapper[4717]: I0218 11:52:01.393206 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:52:01 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:52:01 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:52:01 crc kubenswrapper[4717]: healthz check failed Feb 18 11:52:01 crc kubenswrapper[4717]: I0218 11:52:01.393499 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:52:02 crc kubenswrapper[4717]: I0218 11:52:02.105857 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:52:02 crc kubenswrapper[4717]: I0218 11:52:02.124161 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-z4csx" Feb 18 11:52:02 crc kubenswrapper[4717]: I0218 11:52:02.392346 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:52:02 crc kubenswrapper[4717]: [-]has-synced failed: 
reason withheld Feb 18 11:52:02 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:52:02 crc kubenswrapper[4717]: healthz check failed Feb 18 11:52:02 crc kubenswrapper[4717]: I0218 11:52:02.392403 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:52:02 crc kubenswrapper[4717]: I0218 11:52:02.955863 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:52:03 crc kubenswrapper[4717]: I0218 11:52:03.057859 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8b7cd29-4310-4f4b-8183-7ef4fa3c5198-kube-api-access\") pod \"c8b7cd29-4310-4f4b-8183-7ef4fa3c5198\" (UID: \"c8b7cd29-4310-4f4b-8183-7ef4fa3c5198\") " Feb 18 11:52:03 crc kubenswrapper[4717]: I0218 11:52:03.057977 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8b7cd29-4310-4f4b-8183-7ef4fa3c5198-kubelet-dir\") pod \"c8b7cd29-4310-4f4b-8183-7ef4fa3c5198\" (UID: \"c8b7cd29-4310-4f4b-8183-7ef4fa3c5198\") " Feb 18 11:52:03 crc kubenswrapper[4717]: I0218 11:52:03.058374 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8b7cd29-4310-4f4b-8183-7ef4fa3c5198-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c8b7cd29-4310-4f4b-8183-7ef4fa3c5198" (UID: "c8b7cd29-4310-4f4b-8183-7ef4fa3c5198"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:52:03 crc kubenswrapper[4717]: I0218 11:52:03.088275 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b7cd29-4310-4f4b-8183-7ef4fa3c5198-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c8b7cd29-4310-4f4b-8183-7ef4fa3c5198" (UID: "c8b7cd29-4310-4f4b-8183-7ef4fa3c5198"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:52:03 crc kubenswrapper[4717]: I0218 11:52:03.161399 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8b7cd29-4310-4f4b-8183-7ef4fa3c5198-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:03 crc kubenswrapper[4717]: I0218 11:52:03.161431 4717 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8b7cd29-4310-4f4b-8183-7ef4fa3c5198-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:03 crc kubenswrapper[4717]: I0218 11:52:03.337973 4717 generic.go:334] "Generic (PLEG): container finished" podID="f8f84bd2-c14f-4544-affa-5515de7f8414" containerID="f686e6b1f620ba5db29a3db38bf03d975073488b016a4439cf590618fce0584f" exitCode=0 Feb 18 11:52:03 crc kubenswrapper[4717]: I0218 11:52:03.338024 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f8f84bd2-c14f-4544-affa-5515de7f8414","Type":"ContainerDied","Data":"f686e6b1f620ba5db29a3db38bf03d975073488b016a4439cf590618fce0584f"} Feb 18 11:52:03 crc kubenswrapper[4717]: I0218 11:52:03.349976 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c8b7cd29-4310-4f4b-8183-7ef4fa3c5198","Type":"ContainerDied","Data":"41024954ddc8cc436eba58f280be5c6f41225d70923a09e86d7f2d77386762a5"} Feb 18 11:52:03 crc kubenswrapper[4717]: I0218 
11:52:03.350016 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41024954ddc8cc436eba58f280be5c6f41225d70923a09e86d7f2d77386762a5" Feb 18 11:52:03 crc kubenswrapper[4717]: I0218 11:52:03.350071 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 11:52:03 crc kubenswrapper[4717]: I0218 11:52:03.398136 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:52:03 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:52:03 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:52:03 crc kubenswrapper[4717]: healthz check failed Feb 18 11:52:03 crc kubenswrapper[4717]: I0218 11:52:03.398210 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:52:03 crc kubenswrapper[4717]: I0218 11:52:03.544736 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-zlwvm" Feb 18 11:52:04 crc kubenswrapper[4717]: I0218 11:52:04.398036 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:52:04 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:52:04 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:52:04 crc kubenswrapper[4717]: healthz check failed Feb 18 11:52:04 crc kubenswrapper[4717]: I0218 11:52:04.398100 4717 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:52:04 crc kubenswrapper[4717]: I0218 11:52:04.918166 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:52:05 crc kubenswrapper[4717]: I0218 11:52:05.024281 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8f84bd2-c14f-4544-affa-5515de7f8414-kubelet-dir\") pod \"f8f84bd2-c14f-4544-affa-5515de7f8414\" (UID: \"f8f84bd2-c14f-4544-affa-5515de7f8414\") " Feb 18 11:52:05 crc kubenswrapper[4717]: I0218 11:52:05.024354 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8f84bd2-c14f-4544-affa-5515de7f8414-kube-api-access\") pod \"f8f84bd2-c14f-4544-affa-5515de7f8414\" (UID: \"f8f84bd2-c14f-4544-affa-5515de7f8414\") " Feb 18 11:52:05 crc kubenswrapper[4717]: I0218 11:52:05.025118 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8f84bd2-c14f-4544-affa-5515de7f8414-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f8f84bd2-c14f-4544-affa-5515de7f8414" (UID: "f8f84bd2-c14f-4544-affa-5515de7f8414"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:52:05 crc kubenswrapper[4717]: I0218 11:52:05.043540 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8f84bd2-c14f-4544-affa-5515de7f8414-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f8f84bd2-c14f-4544-affa-5515de7f8414" (UID: "f8f84bd2-c14f-4544-affa-5515de7f8414"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:52:05 crc kubenswrapper[4717]: I0218 11:52:05.126219 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8f84bd2-c14f-4544-affa-5515de7f8414-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:05 crc kubenswrapper[4717]: I0218 11:52:05.126279 4717 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8f84bd2-c14f-4544-affa-5515de7f8414-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:05 crc kubenswrapper[4717]: I0218 11:52:05.398910 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f8f84bd2-c14f-4544-affa-5515de7f8414","Type":"ContainerDied","Data":"83fd545b94f6ce6a18994116cf28822d97a228c15ef92507ffb53a075ee486c6"} Feb 18 11:52:05 crc kubenswrapper[4717]: I0218 11:52:05.398955 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83fd545b94f6ce6a18994116cf28822d97a228c15ef92507ffb53a075ee486c6" Feb 18 11:52:05 crc kubenswrapper[4717]: I0218 11:52:05.399031 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 11:52:05 crc kubenswrapper[4717]: I0218 11:52:05.406470 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:52:05 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:52:05 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:52:05 crc kubenswrapper[4717]: healthz check failed Feb 18 11:52:05 crc kubenswrapper[4717]: I0218 11:52:05.406530 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:52:06 crc kubenswrapper[4717]: I0218 11:52:06.393252 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:52:06 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:52:06 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:52:06 crc kubenswrapper[4717]: healthz check failed Feb 18 11:52:06 crc kubenswrapper[4717]: I0218 11:52:06.393639 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:52:07 crc kubenswrapper[4717]: I0218 11:52:07.392381 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 18 11:52:07 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:52:07 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:52:07 crc kubenswrapper[4717]: healthz check failed Feb 18 11:52:07 crc kubenswrapper[4717]: I0218 11:52:07.392445 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:52:07 crc kubenswrapper[4717]: I0218 11:52:07.670032 4717 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9dzs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 18 11:52:07 crc kubenswrapper[4717]: I0218 11:52:07.670049 4717 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9dzs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 18 11:52:07 crc kubenswrapper[4717]: I0218 11:52:07.670098 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 18 11:52:07 crc kubenswrapper[4717]: I0218 11:52:07.670110 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 18 11:52:07 crc 
kubenswrapper[4717]: I0218 11:52:07.931832 4717 patch_prober.go:28] interesting pod/console-f9d7485db-mk6cn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 18 11:52:07 crc kubenswrapper[4717]: I0218 11:52:07.932180 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-mk6cn" podUID="3c042f74-11a5-46a9-bc05-6b3278428e36" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 18 11:52:08 crc kubenswrapper[4717]: I0218 11:52:08.181322 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs\") pod \"network-metrics-daemon-gxzpl\" (UID: \"a549f413-5b44-4fac-a21e-4f41cc30fbe6\") " pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:52:08 crc kubenswrapper[4717]: I0218 11:52:08.203759 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549f413-5b44-4fac-a21e-4f41cc30fbe6-metrics-certs\") pod \"network-metrics-daemon-gxzpl\" (UID: \"a549f413-5b44-4fac-a21e-4f41cc30fbe6\") " pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:52:08 crc kubenswrapper[4717]: I0218 11:52:08.255866 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gxzpl" Feb 18 11:52:08 crc kubenswrapper[4717]: I0218 11:52:08.394819 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:52:08 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:52:08 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:52:08 crc kubenswrapper[4717]: healthz check failed Feb 18 11:52:08 crc kubenswrapper[4717]: I0218 11:52:08.395469 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:52:09 crc kubenswrapper[4717]: I0218 11:52:09.403185 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 11:52:09 crc kubenswrapper[4717]: [-]has-synced failed: reason withheld Feb 18 11:52:09 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:52:09 crc kubenswrapper[4717]: healthz check failed Feb 18 11:52:09 crc kubenswrapper[4717]: I0218 11:52:09.403267 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:52:10 crc kubenswrapper[4717]: I0218 11:52:10.393797 4717 patch_prober.go:28] interesting pod/router-default-5444994796-dnv67 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 18 11:52:10 crc kubenswrapper[4717]: [+]has-synced ok Feb 18 11:52:10 crc kubenswrapper[4717]: [+]process-running ok Feb 18 11:52:10 crc kubenswrapper[4717]: healthz check failed Feb 18 11:52:10 crc kubenswrapper[4717]: I0218 11:52:10.393903 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dnv67" podUID="78e76a00-064d-419f-bc39-a6e0d81e3176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:52:11 crc kubenswrapper[4717]: I0218 11:52:11.392937 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:52:11 crc kubenswrapper[4717]: I0218 11:52:11.396711 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dnv67" Feb 18 11:52:12 crc kubenswrapper[4717]: I0218 11:52:12.773284 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:52:12 crc kubenswrapper[4717]: I0218 11:52:12.773709 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:52:15 crc kubenswrapper[4717]: I0218 11:52:15.425885 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-svr2x"] Feb 18 11:52:15 crc kubenswrapper[4717]: I0218 11:52:15.426095 4717 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" podUID="23a7650f-4199-4243-8ca1-07a4d4a8c8b4" containerName="controller-manager" containerID="cri-o://9e83bdb0b2a3f3a22caf14272bd13500331da514c4bfd44cb648717114fd1333" gracePeriod=30 Feb 18 11:52:15 crc kubenswrapper[4717]: I0218 11:52:15.446472 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc"] Feb 18 11:52:15 crc kubenswrapper[4717]: I0218 11:52:15.446715 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" podUID="ff0e32a4-5a0f-4779-a1db-b67aec04f414" containerName="route-controller-manager" containerID="cri-o://bb36b837b87d3615978bc8e3d30c73c8ce6dd6a883ee8c76d1c50fa5267bfc7c" gracePeriod=30 Feb 18 11:52:16 crc kubenswrapper[4717]: I0218 11:52:16.564154 4717 generic.go:334] "Generic (PLEG): container finished" podID="ff0e32a4-5a0f-4779-a1db-b67aec04f414" containerID="bb36b837b87d3615978bc8e3d30c73c8ce6dd6a883ee8c76d1c50fa5267bfc7c" exitCode=0 Feb 18 11:52:16 crc kubenswrapper[4717]: I0218 11:52:16.564199 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" event={"ID":"ff0e32a4-5a0f-4779-a1db-b67aec04f414","Type":"ContainerDied","Data":"bb36b837b87d3615978bc8e3d30c73c8ce6dd6a883ee8c76d1c50fa5267bfc7c"} Feb 18 11:52:17 crc kubenswrapper[4717]: I0218 11:52:17.185160 4717 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vkklc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 18 11:52:17 crc kubenswrapper[4717]: I0218 11:52:17.185535 4717 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" podUID="ff0e32a4-5a0f-4779-a1db-b67aec04f414" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 18 11:52:17 crc kubenswrapper[4717]: I0218 11:52:17.669700 4717 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9dzs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 18 11:52:17 crc kubenswrapper[4717]: I0218 11:52:17.669757 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 18 11:52:17 crc kubenswrapper[4717]: I0218 11:52:17.669793 4717 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9dzs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 18 11:52:17 crc kubenswrapper[4717]: I0218 11:52:17.669807 4717 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-s9dzs" Feb 18 11:52:17 crc kubenswrapper[4717]: I0218 11:52:17.669844 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 18 11:52:17 crc kubenswrapper[4717]: I0218 11:52:17.670227 4717 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-s9dzs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 18 11:52:17 crc kubenswrapper[4717]: I0218 11:52:17.670297 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 18 11:52:17 crc kubenswrapper[4717]: I0218 11:52:17.670365 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"c47dc9e90833c72dc9416a34e9bdcd8410c3bac15c46e6dcae04e7095f86bcc0"} pod="openshift-console/downloads-7954f5f757-s9dzs" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 18 11:52:17 crc kubenswrapper[4717]: I0218 11:52:17.670453 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" containerID="cri-o://c47dc9e90833c72dc9416a34e9bdcd8410c3bac15c46e6dcae04e7095f86bcc0" gracePeriod=2 Feb 18 11:52:17 crc kubenswrapper[4717]: I0218 11:52:17.919356 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:52:17 crc kubenswrapper[4717]: I0218 11:52:17.985109 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:52:17 crc kubenswrapper[4717]: I0218 11:52:17.989835 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 11:52:18 crc kubenswrapper[4717]: I0218 
11:52:18.209937 4717 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-svr2x container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 11:52:18 crc kubenswrapper[4717]: I0218 11:52:18.210009 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" podUID="23a7650f-4199-4243-8ca1-07a4d4a8c8b4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 11:52:21 crc kubenswrapper[4717]: I0218 11:52:21.592219 4717 generic.go:334] "Generic (PLEG): container finished" podID="0e8734b5-4294-4091-b377-680aa4178a19" containerID="c47dc9e90833c72dc9416a34e9bdcd8410c3bac15c46e6dcae04e7095f86bcc0" exitCode=0 Feb 18 11:52:21 crc kubenswrapper[4717]: I0218 11:52:21.592361 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-s9dzs" event={"ID":"0e8734b5-4294-4091-b377-680aa4178a19","Type":"ContainerDied","Data":"c47dc9e90833c72dc9416a34e9bdcd8410c3bac15c46e6dcae04e7095f86bcc0"} Feb 18 11:52:21 crc kubenswrapper[4717]: I0218 11:52:21.594209 4717 generic.go:334] "Generic (PLEG): container finished" podID="23a7650f-4199-4243-8ca1-07a4d4a8c8b4" containerID="9e83bdb0b2a3f3a22caf14272bd13500331da514c4bfd44cb648717114fd1333" exitCode=0 Feb 18 11:52:21 crc kubenswrapper[4717]: I0218 11:52:21.594228 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" event={"ID":"23a7650f-4199-4243-8ca1-07a4d4a8c8b4","Type":"ContainerDied","Data":"9e83bdb0b2a3f3a22caf14272bd13500331da514c4bfd44cb648717114fd1333"} Feb 18 11:52:27 crc 
kubenswrapper[4717]: I0218 11:52:27.671121 4717 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9dzs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 18 11:52:27 crc kubenswrapper[4717]: I0218 11:52:27.671719 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 18 11:52:27 crc kubenswrapper[4717]: I0218 11:52:27.933486 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:52:27 crc kubenswrapper[4717]: I0218 11:52:27.943252 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:52:27 crc kubenswrapper[4717]: I0218 11:52:27.988975 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b4698c5-s5dg5"] Feb 18 11:52:27 crc kubenswrapper[4717]: E0218 11:52:27.989295 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0e32a4-5a0f-4779-a1db-b67aec04f414" containerName="route-controller-manager" Feb 18 11:52:27 crc kubenswrapper[4717]: I0218 11:52:27.989307 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0e32a4-5a0f-4779-a1db-b67aec04f414" containerName="route-controller-manager" Feb 18 11:52:27 crc kubenswrapper[4717]: E0218 11:52:27.989321 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f84bd2-c14f-4544-affa-5515de7f8414" containerName="pruner" Feb 18 11:52:27 crc kubenswrapper[4717]: I0218 11:52:27.989327 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f84bd2-c14f-4544-affa-5515de7f8414" containerName="pruner" Feb 18 11:52:27 crc kubenswrapper[4717]: E0218 11:52:27.989363 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a7650f-4199-4243-8ca1-07a4d4a8c8b4" containerName="controller-manager" Feb 18 11:52:27 crc kubenswrapper[4717]: I0218 11:52:27.989371 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a7650f-4199-4243-8ca1-07a4d4a8c8b4" containerName="controller-manager" Feb 18 11:52:27 crc kubenswrapper[4717]: E0218 11:52:27.989382 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b7cd29-4310-4f4b-8183-7ef4fa3c5198" containerName="pruner" Feb 18 11:52:27 crc kubenswrapper[4717]: I0218 11:52:27.989387 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b7cd29-4310-4f4b-8183-7ef4fa3c5198" containerName="pruner" Feb 18 11:52:27 crc kubenswrapper[4717]: I0218 11:52:27.989535 4717 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c8b7cd29-4310-4f4b-8183-7ef4fa3c5198" containerName="pruner" Feb 18 11:52:27 crc kubenswrapper[4717]: I0218 11:52:27.989551 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff0e32a4-5a0f-4779-a1db-b67aec04f414" containerName="route-controller-manager" Feb 18 11:52:27 crc kubenswrapper[4717]: I0218 11:52:27.989561 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a7650f-4199-4243-8ca1-07a4d4a8c8b4" containerName="controller-manager" Feb 18 11:52:27 crc kubenswrapper[4717]: I0218 11:52:27.989568 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f84bd2-c14f-4544-affa-5515de7f8414" containerName="pruner" Feb 18 11:52:27 crc kubenswrapper[4717]: I0218 11:52:27.990016 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.003057 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b4698c5-s5dg5"] Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.129406 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0e32a4-5a0f-4779-a1db-b67aec04f414-config\") pod \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\" (UID: \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\") " Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.129494 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-proxy-ca-bundles\") pod \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.129540 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn75r\" (UniqueName: 
\"kubernetes.io/projected/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-kube-api-access-kn75r\") pod \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.129579 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s96p\" (UniqueName: \"kubernetes.io/projected/ff0e32a4-5a0f-4779-a1db-b67aec04f414-kube-api-access-8s96p\") pod \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\" (UID: \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\") " Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.129617 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-serving-cert\") pod \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.129665 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-client-ca\") pod \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.129710 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff0e32a4-5a0f-4779-a1db-b67aec04f414-serving-cert\") pod \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\" (UID: \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\") " Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.129738 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff0e32a4-5a0f-4779-a1db-b67aec04f414-client-ca\") pod \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\" (UID: \"ff0e32a4-5a0f-4779-a1db-b67aec04f414\") " Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.129777 
4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-config\") pod \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\" (UID: \"23a7650f-4199-4243-8ca1-07a4d4a8c8b4\") " Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.129939 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-client-ca\") pod \"controller-manager-b4698c5-s5dg5\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.130012 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-proxy-ca-bundles\") pod \"controller-manager-b4698c5-s5dg5\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.130056 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0682e72a-30fc-44d7-a711-8c6ca19a277c-serving-cert\") pod \"controller-manager-b4698c5-s5dg5\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.130102 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-config\") pod \"controller-manager-b4698c5-s5dg5\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 
11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.130134 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfqzv\" (UniqueName: \"kubernetes.io/projected/0682e72a-30fc-44d7-a711-8c6ca19a277c-kube-api-access-rfqzv\") pod \"controller-manager-b4698c5-s5dg5\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.130186 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff0e32a4-5a0f-4779-a1db-b67aec04f414-config" (OuterVolumeSpecName: "config") pod "ff0e32a4-5a0f-4779-a1db-b67aec04f414" (UID: "ff0e32a4-5a0f-4779-a1db-b67aec04f414"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.130983 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "23a7650f-4199-4243-8ca1-07a4d4a8c8b4" (UID: "23a7650f-4199-4243-8ca1-07a4d4a8c8b4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.131856 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-client-ca" (OuterVolumeSpecName: "client-ca") pod "23a7650f-4199-4243-8ca1-07a4d4a8c8b4" (UID: "23a7650f-4199-4243-8ca1-07a4d4a8c8b4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.132498 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff0e32a4-5a0f-4779-a1db-b67aec04f414-client-ca" (OuterVolumeSpecName: "client-ca") pod "ff0e32a4-5a0f-4779-a1db-b67aec04f414" (UID: "ff0e32a4-5a0f-4779-a1db-b67aec04f414"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.133183 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-config" (OuterVolumeSpecName: "config") pod "23a7650f-4199-4243-8ca1-07a4d4a8c8b4" (UID: "23a7650f-4199-4243-8ca1-07a4d4a8c8b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.138753 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff0e32a4-5a0f-4779-a1db-b67aec04f414-kube-api-access-8s96p" (OuterVolumeSpecName: "kube-api-access-8s96p") pod "ff0e32a4-5a0f-4779-a1db-b67aec04f414" (UID: "ff0e32a4-5a0f-4779-a1db-b67aec04f414"). InnerVolumeSpecName "kube-api-access-8s96p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.142718 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-kube-api-access-kn75r" (OuterVolumeSpecName: "kube-api-access-kn75r") pod "23a7650f-4199-4243-8ca1-07a4d4a8c8b4" (UID: "23a7650f-4199-4243-8ca1-07a4d4a8c8b4"). InnerVolumeSpecName "kube-api-access-kn75r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.142778 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff0e32a4-5a0f-4779-a1db-b67aec04f414-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ff0e32a4-5a0f-4779-a1db-b67aec04f414" (UID: "ff0e32a4-5a0f-4779-a1db-b67aec04f414"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.142809 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "23a7650f-4199-4243-8ca1-07a4d4a8c8b4" (UID: "23a7650f-4199-4243-8ca1-07a4d4a8c8b4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.185705 4717 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vkklc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.185776 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" podUID="ff0e32a4-5a0f-4779-a1db-b67aec04f414" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.210186 4717 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-svr2x container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.210277 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" podUID="23a7650f-4199-4243-8ca1-07a4d4a8c8b4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.231205 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-proxy-ca-bundles\") pod \"controller-manager-b4698c5-s5dg5\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.231317 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0682e72a-30fc-44d7-a711-8c6ca19a277c-serving-cert\") pod \"controller-manager-b4698c5-s5dg5\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.231363 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-config\") pod \"controller-manager-b4698c5-s5dg5\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.231388 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfqzv\" (UniqueName: \"kubernetes.io/projected/0682e72a-30fc-44d7-a711-8c6ca19a277c-kube-api-access-rfqzv\") pod \"controller-manager-b4698c5-s5dg5\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.232855 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-proxy-ca-bundles\") pod \"controller-manager-b4698c5-s5dg5\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.232953 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-config\") pod \"controller-manager-b4698c5-s5dg5\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.232570 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-client-ca\") pod \"controller-manager-b4698c5-s5dg5\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.233802 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s96p\" (UniqueName: \"kubernetes.io/projected/ff0e32a4-5a0f-4779-a1db-b67aec04f414-kube-api-access-8s96p\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.234151 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.234172 4717 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.234185 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff0e32a4-5a0f-4779-a1db-b67aec04f414-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.234195 4717 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff0e32a4-5a0f-4779-a1db-b67aec04f414-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.235229 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.235249 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff0e32a4-5a0f-4779-a1db-b67aec04f414-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.235309 4717 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.235322 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn75r\" (UniqueName: \"kubernetes.io/projected/23a7650f-4199-4243-8ca1-07a4d4a8c8b4-kube-api-access-kn75r\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:28 crc kubenswrapper[4717]: 
I0218 11:52:28.234227 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-client-ca\") pod \"controller-manager-b4698c5-s5dg5\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.238686 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0682e72a-30fc-44d7-a711-8c6ca19a277c-serving-cert\") pod \"controller-manager-b4698c5-s5dg5\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.247531 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfqzv\" (UniqueName: \"kubernetes.io/projected/0682e72a-30fc-44d7-a711-8c6ca19a277c-kube-api-access-rfqzv\") pod \"controller-manager-b4698c5-s5dg5\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.326816 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.428335 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vw62k" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.645715 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.646159 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-svr2x" event={"ID":"23a7650f-4199-4243-8ca1-07a4d4a8c8b4","Type":"ContainerDied","Data":"2bea45deb9a5e375f5efbaee3241321a9db9739a9d0a8dae94ffc620c8ca520f"} Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.646218 4717 scope.go:117] "RemoveContainer" containerID="9e83bdb0b2a3f3a22caf14272bd13500331da514c4bfd44cb648717114fd1333" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.654475 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" event={"ID":"ff0e32a4-5a0f-4779-a1db-b67aec04f414","Type":"ContainerDied","Data":"194c1f5c409f41486ebe5b73b1f3282ce948bc7a22e5019817ffd8cb6c2e32a3"} Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.655143 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc" Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.683989 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-svr2x"] Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.691754 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-svr2x"] Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.694817 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc"] Feb 18 11:52:28 crc kubenswrapper[4717]: I0218 11:52:28.697148 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vkklc"] Feb 18 11:52:29 crc kubenswrapper[4717]: I0218 11:52:29.043988 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23a7650f-4199-4243-8ca1-07a4d4a8c8b4" path="/var/lib/kubelet/pods/23a7650f-4199-4243-8ca1-07a4d4a8c8b4/volumes" Feb 18 11:52:29 crc kubenswrapper[4717]: I0218 11:52:29.044746 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff0e32a4-5a0f-4779-a1db-b67aec04f414" path="/var/lib/kubelet/pods/ff0e32a4-5a0f-4779-a1db-b67aec04f414/volumes" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.317713 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8"] Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.318888 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.321498 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.322047 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.322200 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.322343 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.322362 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.323424 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.346962 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8"] Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.471111 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e837eb9-1e61-437f-8416-50fd9e041228-client-ca\") pod \"route-controller-manager-588997d685-wtwj8\" (UID: \"5e837eb9-1e61-437f-8416-50fd9e041228\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.471723 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9842q\" (UniqueName: \"kubernetes.io/projected/5e837eb9-1e61-437f-8416-50fd9e041228-kube-api-access-9842q\") pod \"route-controller-manager-588997d685-wtwj8\" (UID: \"5e837eb9-1e61-437f-8416-50fd9e041228\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.471780 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e837eb9-1e61-437f-8416-50fd9e041228-config\") pod \"route-controller-manager-588997d685-wtwj8\" (UID: \"5e837eb9-1e61-437f-8416-50fd9e041228\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.471834 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e837eb9-1e61-437f-8416-50fd9e041228-serving-cert\") pod \"route-controller-manager-588997d685-wtwj8\" (UID: \"5e837eb9-1e61-437f-8416-50fd9e041228\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.573007 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9842q\" (UniqueName: \"kubernetes.io/projected/5e837eb9-1e61-437f-8416-50fd9e041228-kube-api-access-9842q\") pod \"route-controller-manager-588997d685-wtwj8\" (UID: \"5e837eb9-1e61-437f-8416-50fd9e041228\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.573071 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e837eb9-1e61-437f-8416-50fd9e041228-config\") pod 
\"route-controller-manager-588997d685-wtwj8\" (UID: \"5e837eb9-1e61-437f-8416-50fd9e041228\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.573109 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e837eb9-1e61-437f-8416-50fd9e041228-serving-cert\") pod \"route-controller-manager-588997d685-wtwj8\" (UID: \"5e837eb9-1e61-437f-8416-50fd9e041228\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.573155 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e837eb9-1e61-437f-8416-50fd9e041228-client-ca\") pod \"route-controller-manager-588997d685-wtwj8\" (UID: \"5e837eb9-1e61-437f-8416-50fd9e041228\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.574479 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e837eb9-1e61-437f-8416-50fd9e041228-client-ca\") pod \"route-controller-manager-588997d685-wtwj8\" (UID: \"5e837eb9-1e61-437f-8416-50fd9e041228\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.574636 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e837eb9-1e61-437f-8416-50fd9e041228-config\") pod \"route-controller-manager-588997d685-wtwj8\" (UID: \"5e837eb9-1e61-437f-8416-50fd9e041228\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.583202 4717 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e837eb9-1e61-437f-8416-50fd9e041228-serving-cert\") pod \"route-controller-manager-588997d685-wtwj8\" (UID: \"5e837eb9-1e61-437f-8416-50fd9e041228\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.592718 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9842q\" (UniqueName: \"kubernetes.io/projected/5e837eb9-1e61-437f-8416-50fd9e041228-kube-api-access-9842q\") pod \"route-controller-manager-588997d685-wtwj8\" (UID: \"5e837eb9-1e61-437f-8416-50fd9e041228\") " pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:30 crc kubenswrapper[4717]: I0218 11:52:30.654442 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:34 crc kubenswrapper[4717]: E0218 11:52:34.356643 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 11:52:34 crc kubenswrapper[4717]: E0218 11:52:34.357215 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pthzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-n6fmv_openshift-marketplace(0ee5d22d-8884-4563-8329-c475346f3a03): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 11:52:34 crc kubenswrapper[4717]: E0218 11:52:34.358460 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-n6fmv" podUID="0ee5d22d-8884-4563-8329-c475346f3a03" Feb 18 11:52:35 crc 
kubenswrapper[4717]: I0218 11:52:35.132905 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b4698c5-s5dg5"] Feb 18 11:52:35 crc kubenswrapper[4717]: I0218 11:52:35.164613 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 11:52:35 crc kubenswrapper[4717]: I0218 11:52:35.231807 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8"] Feb 18 11:52:36 crc kubenswrapper[4717]: I0218 11:52:36.641851 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 11:52:36 crc kubenswrapper[4717]: I0218 11:52:36.643001 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:52:36 crc kubenswrapper[4717]: I0218 11:52:36.647177 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 11:52:36 crc kubenswrapper[4717]: I0218 11:52:36.647567 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 11:52:36 crc kubenswrapper[4717]: I0218 11:52:36.650219 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rzqs7"] Feb 18 11:52:36 crc kubenswrapper[4717]: I0218 11:52:36.655877 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 11:52:36 crc kubenswrapper[4717]: I0218 11:52:36.761598 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:52:36 crc kubenswrapper[4717]: I0218 11:52:36.761653 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:52:36 crc kubenswrapper[4717]: I0218 11:52:36.863791 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:52:36 crc kubenswrapper[4717]: I0218 11:52:36.863970 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:52:36 crc kubenswrapper[4717]: I0218 11:52:36.864100 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:52:36 crc kubenswrapper[4717]: I0218 11:52:36.899618 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:52:36 crc kubenswrapper[4717]: I0218 11:52:36.963349 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:52:37 crc kubenswrapper[4717]: I0218 11:52:37.671372 4717 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9dzs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 18 11:52:37 crc kubenswrapper[4717]: I0218 11:52:37.671720 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 18 11:52:39 crc kubenswrapper[4717]: E0218 11:52:39.543511 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-n6fmv" podUID="0ee5d22d-8884-4563-8329-c475346f3a03" Feb 18 11:52:39 crc kubenswrapper[4717]: I0218 11:52:39.593987 4717 scope.go:117] "RemoveContainer" containerID="bb36b837b87d3615978bc8e3d30c73c8ce6dd6a883ee8c76d1c50fa5267bfc7c" Feb 18 11:52:39 crc kubenswrapper[4717]: E0218 11:52:39.641933 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 11:52:39 crc kubenswrapper[4717]: E0218 11:52:39.642090 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mn29z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-7mwxp_openshift-marketplace(3b07ffbf-aa95-48ef-baa5-68fb036483b7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 11:52:39 crc kubenswrapper[4717]: E0218 11:52:39.643311 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code 
= Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-7mwxp" podUID="3b07ffbf-aa95-48ef-baa5-68fb036483b7" Feb 18 11:52:39 crc kubenswrapper[4717]: E0218 11:52:39.662146 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 11:52:39 crc kubenswrapper[4717]: E0218 11:52:39.662438 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmlpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Te
rminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5f7dc_openshift-marketplace(1df54bb8-8456-4c72-8cd4-49abc687ba4d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 11:52:39 crc kubenswrapper[4717]: E0218 11:52:39.665773 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5f7dc" podUID="1df54bb8-8456-4c72-8cd4-49abc687ba4d" Feb 18 11:52:39 crc kubenswrapper[4717]: E0218 11:52:39.682753 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 11:52:39 crc kubenswrapper[4717]: E0218 11:52:39.682941 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8r4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4swdf_openshift-marketplace(739737ee-803a-478a-a7f2-de797ffeca2a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 11:52:39 crc kubenswrapper[4717]: E0218 11:52:39.684010 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4swdf" podUID="739737ee-803a-478a-a7f2-de797ffeca2a" Feb 18 11:52:39 crc 
kubenswrapper[4717]: E0218 11:52:39.714241 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 11:52:39 crc kubenswrapper[4717]: E0218 11:52:39.714425 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7x7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-6bvgx_openshift-marketplace(fb729e3d-5019-4004-876e-c5d39e77e97e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 11:52:39 crc kubenswrapper[4717]: E0218 11:52:39.715736 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-6bvgx" podUID="fb729e3d-5019-4004-876e-c5d39e77e97e" Feb 18 11:52:39 crc kubenswrapper[4717]: E0218 11:52:39.751812 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-7mwxp" podUID="3b07ffbf-aa95-48ef-baa5-68fb036483b7" Feb 18 11:52:39 crc kubenswrapper[4717]: E0218 11:52:39.752062 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5f7dc" podUID="1df54bb8-8456-4c72-8cd4-49abc687ba4d" Feb 18 11:52:39 crc kubenswrapper[4717]: E0218 11:52:39.760746 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4swdf" podUID="739737ee-803a-478a-a7f2-de797ffeca2a" Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.133439 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8"] Feb 18 11:52:40 crc kubenswrapper[4717]: W0218 11:52:40.145820 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e837eb9_1e61_437f_8416_50fd9e041228.slice/crio-df6e7d2857ba12c2297103dab8cef5cbbfee587533155edde4a58e558a2dc3bb WatchSource:0}: Error finding container df6e7d2857ba12c2297103dab8cef5cbbfee587533155edde4a58e558a2dc3bb: Status 404 returned error can't find the container with id df6e7d2857ba12c2297103dab8cef5cbbfee587533155edde4a58e558a2dc3bb Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.181844 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gxzpl"] Feb 18 11:52:40 crc kubenswrapper[4717]: W0218 11:52:40.223678 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda549f413_5b44_4fac_a21e_4f41cc30fbe6.slice/crio-e4d665ffe0456b66c7f8d2551b5d5ddf9ac3c0ba3fcf970327b053e60041b535 WatchSource:0}: Error finding container e4d665ffe0456b66c7f8d2551b5d5ddf9ac3c0ba3fcf970327b053e60041b535: Status 404 returned error can't find the container with id e4d665ffe0456b66c7f8d2551b5d5ddf9ac3c0ba3fcf970327b053e60041b535 Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.236519 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 11:52:40 crc kubenswrapper[4717]: W0218 11:52:40.258237 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3c5e4f2c_84e3_4cd9_a501_71634a4e2ccc.slice/crio-44a54215f819218e26a498bfe9485e43550bbcb2f5ef2119e7753601d7bcbebf WatchSource:0}: Error finding container 44a54215f819218e26a498bfe9485e43550bbcb2f5ef2119e7753601d7bcbebf: Status 404 returned error can't find the container with id 
44a54215f819218e26a498bfe9485e43550bbcb2f5ef2119e7753601d7bcbebf Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.265483 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b4698c5-s5dg5"] Feb 18 11:52:40 crc kubenswrapper[4717]: W0218 11:52:40.290035 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0682e72a_30fc_44d7_a711_8c6ca19a277c.slice/crio-2c0561e68caa403ed25ec2ec6b0b574c157063040b59b8398334e8345060c9d8 WatchSource:0}: Error finding container 2c0561e68caa403ed25ec2ec6b0b574c157063040b59b8398334e8345060c9d8: Status 404 returned error can't find the container with id 2c0561e68caa403ed25ec2ec6b0b574c157063040b59b8398334e8345060c9d8 Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.745128 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-s9dzs" event={"ID":"0e8734b5-4294-4091-b377-680aa4178a19","Type":"ContainerStarted","Data":"c5e963d1bcef84244f514959c7ba98f160210c30141a214f03b9eef9b6a89b65"} Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.746831 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-s9dzs" Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.746903 4717 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9dzs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.746971 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 
18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.750188 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" event={"ID":"a549f413-5b44-4fac-a21e-4f41cc30fbe6","Type":"ContainerStarted","Data":"a7b54baed7a3c6e4b652771f33eb6c4cf2da0a592025ca708fba1d8d478af17a"} Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.750224 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" event={"ID":"a549f413-5b44-4fac-a21e-4f41cc30fbe6","Type":"ContainerStarted","Data":"e4d665ffe0456b66c7f8d2551b5d5ddf9ac3c0ba3fcf970327b053e60041b535"} Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.756708 4717 generic.go:334] "Generic (PLEG): container finished" podID="e1b37906-b51d-4abf-be9c-8607a92dfa40" containerID="cd117612dc1bfcd594f6a57ca1a8832237a12941274a8ca9b811a8cb5b22ae37" exitCode=0 Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.756876 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ksxg" event={"ID":"e1b37906-b51d-4abf-be9c-8607a92dfa40","Type":"ContainerDied","Data":"cd117612dc1bfcd594f6a57ca1a8832237a12941274a8ca9b811a8cb5b22ae37"} Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.761157 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" event={"ID":"0682e72a-30fc-44d7-a711-8c6ca19a277c","Type":"ContainerStarted","Data":"725a54318efbd15b8c9820132977b2fa55e9ea997ccbf774a6e6a871a3086a36"} Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.761426 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" event={"ID":"0682e72a-30fc-44d7-a711-8c6ca19a277c","Type":"ContainerStarted","Data":"2c0561e68caa403ed25ec2ec6b0b574c157063040b59b8398334e8345060c9d8"} Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.761937 4717 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" podUID="0682e72a-30fc-44d7-a711-8c6ca19a277c" containerName="controller-manager" containerID="cri-o://725a54318efbd15b8c9820132977b2fa55e9ea997ccbf774a6e6a871a3086a36" gracePeriod=30 Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.762513 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.771154 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" event={"ID":"5e837eb9-1e61-437f-8416-50fd9e041228","Type":"ContainerStarted","Data":"bcb711ecc7adc2fb3468405989a28a4700ae224d8b02fc2ff42151dac40d6b6a"} Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.771202 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" event={"ID":"5e837eb9-1e61-437f-8416-50fd9e041228","Type":"ContainerStarted","Data":"df6e7d2857ba12c2297103dab8cef5cbbfee587533155edde4a58e558a2dc3bb"} Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.774774 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" podUID="5e837eb9-1e61-437f-8416-50fd9e041228" containerName="route-controller-manager" containerID="cri-o://bcb711ecc7adc2fb3468405989a28a4700ae224d8b02fc2ff42151dac40d6b6a" gracePeriod=30 Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.774967 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.782927 4717 patch_prober.go:28] interesting pod/controller-manager-b4698c5-s5dg5 container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:56562->10.217.0.54:8443: read: connection reset by peer" start-of-body= Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.783327 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" podUID="0682e72a-30fc-44d7-a711-8c6ca19a277c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:56562->10.217.0.54:8443: read: connection reset by peer" Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.787955 4717 generic.go:334] "Generic (PLEG): container finished" podID="154937ce-02f4-41f5-a061-fb7890e7cf40" containerID="7f43ca82909b91477189510d747363fa77ee915ac3644cb504cb45f989ef0bdf" exitCode=0 Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.788048 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2cjl" event={"ID":"154937ce-02f4-41f5-a061-fb7890e7cf40","Type":"ContainerDied","Data":"7f43ca82909b91477189510d747363fa77ee915ac3644cb504cb45f989ef0bdf"} Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.796679 4717 generic.go:334] "Generic (PLEG): container finished" podID="9250b3da-040d-4f0c-84d0-5d795bf3479d" containerID="ac4c5f6000ad045bbaf0b3701e8034e31018d7e551e74cd0c2d35aabc2dcf811" exitCode=0 Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.796764 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c9n76" event={"ID":"9250b3da-040d-4f0c-84d0-5d795bf3479d","Type":"ContainerDied","Data":"ac4c5f6000ad045bbaf0b3701e8034e31018d7e551e74cd0c2d35aabc2dcf811"} Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.803368 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc","Type":"ContainerStarted","Data":"d9aa2405afbe13a52ada53d5e8d762ea974a578d091e9d88ea78d7ca88ade0bd"} Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.803404 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc","Type":"ContainerStarted","Data":"44a54215f819218e26a498bfe9485e43550bbcb2f5ef2119e7753601d7bcbebf"} Feb 18 11:52:40 crc kubenswrapper[4717]: E0218 11:52:40.803655 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6bvgx" podUID="fb729e3d-5019-4004-876e-c5d39e77e97e" Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.810882 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" podStartSLOduration=25.810863155 podStartE2EDuration="25.810863155s" podCreationTimestamp="2026-02-18 11:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:52:40.808294071 +0000 UTC m=+195.210395387" watchObservedRunningTime="2026-02-18 11:52:40.810863155 +0000 UTC m=+195.212964461" Feb 18 11:52:40 crc kubenswrapper[4717]: I0218 11:52:40.884885 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=4.884864599 podStartE2EDuration="4.884864599s" podCreationTimestamp="2026-02-18 11:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:52:40.8658898 +0000 UTC m=+195.267991126" watchObservedRunningTime="2026-02-18 
11:52:40.884864599 +0000 UTC m=+195.286965915" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.038808 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" podStartSLOduration=26.038791927 podStartE2EDuration="26.038791927s" podCreationTimestamp="2026-02-18 11:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:52:40.911763154 +0000 UTC m=+195.313864470" watchObservedRunningTime="2026-02-18 11:52:41.038791927 +0000 UTC m=+195.440893243" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.046709 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.047605 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.058919 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.087737 4717 patch_prober.go:28] interesting pod/route-controller-manager-588997d685-wtwj8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:35828->10.217.0.55:8443: read: connection reset by peer" start-of-body= Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.087795 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" podUID="5e837eb9-1e61-437f-8416-50fd9e041228" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:35828->10.217.0.55:8443: read: connection reset 
by peer" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.128300 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-var-lock\") pod \"installer-9-crc\" (UID: \"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.128368 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.128398 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-kube-api-access\") pod \"installer-9-crc\" (UID: \"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.231512 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-var-lock\") pod \"installer-9-crc\" (UID: \"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.231580 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.231605 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-kube-api-access\") pod \"installer-9-crc\" (UID: \"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.231756 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.231804 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-var-lock\") pod \"installer-9-crc\" (UID: \"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.255083 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-kube-api-access\") pod \"installer-9-crc\" (UID: \"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.292167 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.320414 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6698fbf897-82z8t"] Feb 18 11:52:41 crc kubenswrapper[4717]: E0218 11:52:41.320907 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0682e72a-30fc-44d7-a711-8c6ca19a277c" containerName="controller-manager" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.320997 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="0682e72a-30fc-44d7-a711-8c6ca19a277c" containerName="controller-manager" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.321223 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="0682e72a-30fc-44d7-a711-8c6ca19a277c" containerName="controller-manager" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.321999 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.332838 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6698fbf897-82z8t"] Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.333192 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-config\") pod \"0682e72a-30fc-44d7-a711-8c6ca19a277c\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.333230 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-proxy-ca-bundles\") pod \"0682e72a-30fc-44d7-a711-8c6ca19a277c\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.333309 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfqzv\" (UniqueName: \"kubernetes.io/projected/0682e72a-30fc-44d7-a711-8c6ca19a277c-kube-api-access-rfqzv\") pod \"0682e72a-30fc-44d7-a711-8c6ca19a277c\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.333356 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-client-ca\") pod \"0682e72a-30fc-44d7-a711-8c6ca19a277c\" (UID: \"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.333382 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0682e72a-30fc-44d7-a711-8c6ca19a277c-serving-cert\") pod \"0682e72a-30fc-44d7-a711-8c6ca19a277c\" (UID: 
\"0682e72a-30fc-44d7-a711-8c6ca19a277c\") " Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.334540 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdsd4\" (UniqueName: \"kubernetes.io/projected/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-kube-api-access-bdsd4\") pod \"controller-manager-6698fbf897-82z8t\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.334594 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-serving-cert\") pod \"controller-manager-6698fbf897-82z8t\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.334630 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-proxy-ca-bundles\") pod \"controller-manager-6698fbf897-82z8t\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.334649 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-client-ca\") pod \"controller-manager-6698fbf897-82z8t\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.334681 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-config\") pod \"controller-manager-6698fbf897-82z8t\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.334763 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0682e72a-30fc-44d7-a711-8c6ca19a277c" (UID: "0682e72a-30fc-44d7-a711-8c6ca19a277c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.334905 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-client-ca" (OuterVolumeSpecName: "client-ca") pod "0682e72a-30fc-44d7-a711-8c6ca19a277c" (UID: "0682e72a-30fc-44d7-a711-8c6ca19a277c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.335114 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-config" (OuterVolumeSpecName: "config") pod "0682e72a-30fc-44d7-a711-8c6ca19a277c" (UID: "0682e72a-30fc-44d7-a711-8c6ca19a277c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.341809 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0682e72a-30fc-44d7-a711-8c6ca19a277c-kube-api-access-rfqzv" (OuterVolumeSpecName: "kube-api-access-rfqzv") pod "0682e72a-30fc-44d7-a711-8c6ca19a277c" (UID: "0682e72a-30fc-44d7-a711-8c6ca19a277c"). InnerVolumeSpecName "kube-api-access-rfqzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.342029 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0682e72a-30fc-44d7-a711-8c6ca19a277c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0682e72a-30fc-44d7-a711-8c6ca19a277c" (UID: "0682e72a-30fc-44d7-a711-8c6ca19a277c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.365444 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.435818 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdsd4\" (UniqueName: \"kubernetes.io/projected/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-kube-api-access-bdsd4\") pod \"controller-manager-6698fbf897-82z8t\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.435903 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-serving-cert\") pod \"controller-manager-6698fbf897-82z8t\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.435947 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-proxy-ca-bundles\") pod \"controller-manager-6698fbf897-82z8t\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 
11:52:41.435974 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-client-ca\") pod \"controller-manager-6698fbf897-82z8t\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.436015 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-config\") pod \"controller-manager-6698fbf897-82z8t\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.436054 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfqzv\" (UniqueName: \"kubernetes.io/projected/0682e72a-30fc-44d7-a711-8c6ca19a277c-kube-api-access-rfqzv\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.436067 4717 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.436078 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0682e72a-30fc-44d7-a711-8c6ca19a277c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.436089 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.436100 4717 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/0682e72a-30fc-44d7-a711-8c6ca19a277c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.437378 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-client-ca\") pod \"controller-manager-6698fbf897-82z8t\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.437666 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-proxy-ca-bundles\") pod \"controller-manager-6698fbf897-82z8t\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.437909 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-config\") pod \"controller-manager-6698fbf897-82z8t\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.440341 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-serving-cert\") pod \"controller-manager-6698fbf897-82z8t\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.456059 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdsd4\" (UniqueName: 
\"kubernetes.io/projected/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-kube-api-access-bdsd4\") pod \"controller-manager-6698fbf897-82z8t\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.652517 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.816217 4717 generic.go:334] "Generic (PLEG): container finished" podID="0682e72a-30fc-44d7-a711-8c6ca19a277c" containerID="725a54318efbd15b8c9820132977b2fa55e9ea997ccbf774a6e6a871a3086a36" exitCode=0 Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.816379 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" event={"ID":"0682e72a-30fc-44d7-a711-8c6ca19a277c","Type":"ContainerDied","Data":"725a54318efbd15b8c9820132977b2fa55e9ea997ccbf774a6e6a871a3086a36"} Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.816416 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" event={"ID":"0682e72a-30fc-44d7-a711-8c6ca19a277c","Type":"ContainerDied","Data":"2c0561e68caa403ed25ec2ec6b0b574c157063040b59b8398334e8345060c9d8"} Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.816456 4717 scope.go:117] "RemoveContainer" containerID="725a54318efbd15b8c9820132977b2fa55e9ea997ccbf774a6e6a871a3086a36" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.816477 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b4698c5-s5dg5" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.817543 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.822922 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-588997d685-wtwj8_5e837eb9-1e61-437f-8416-50fd9e041228/route-controller-manager/0.log" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.822962 4717 generic.go:334] "Generic (PLEG): container finished" podID="5e837eb9-1e61-437f-8416-50fd9e041228" containerID="bcb711ecc7adc2fb3468405989a28a4700ae224d8b02fc2ff42151dac40d6b6a" exitCode=255 Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.823021 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" event={"ID":"5e837eb9-1e61-437f-8416-50fd9e041228","Type":"ContainerDied","Data":"bcb711ecc7adc2fb3468405989a28a4700ae224d8b02fc2ff42151dac40d6b6a"} Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.824933 4717 generic.go:334] "Generic (PLEG): container finished" podID="3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc" containerID="d9aa2405afbe13a52ada53d5e8d762ea974a578d091e9d88ea78d7ca88ade0bd" exitCode=0 Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.825002 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc","Type":"ContainerDied","Data":"d9aa2405afbe13a52ada53d5e8d762ea974a578d091e9d88ea78d7ca88ade0bd"} Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.826184 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gxzpl" 
event={"ID":"a549f413-5b44-4fac-a21e-4f41cc30fbe6","Type":"ContainerStarted","Data":"2abd81747993324efdc8b66e639a52d05cb7a8f36a4de7d1967cd63320a0dcd6"} Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.827108 4717 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9dzs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.827288 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.861776 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b4698c5-s5dg5"] Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.872921 4717 scope.go:117] "RemoveContainer" containerID="725a54318efbd15b8c9820132977b2fa55e9ea997ccbf774a6e6a871a3086a36" Feb 18 11:52:41 crc kubenswrapper[4717]: E0218 11:52:41.878913 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"725a54318efbd15b8c9820132977b2fa55e9ea997ccbf774a6e6a871a3086a36\": container with ID starting with 725a54318efbd15b8c9820132977b2fa55e9ea997ccbf774a6e6a871a3086a36 not found: ID does not exist" containerID="725a54318efbd15b8c9820132977b2fa55e9ea997ccbf774a6e6a871a3086a36" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.878955 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"725a54318efbd15b8c9820132977b2fa55e9ea997ccbf774a6e6a871a3086a36"} err="failed to get container status 
\"725a54318efbd15b8c9820132977b2fa55e9ea997ccbf774a6e6a871a3086a36\": rpc error: code = NotFound desc = could not find container \"725a54318efbd15b8c9820132977b2fa55e9ea997ccbf774a6e6a871a3086a36\": container with ID starting with 725a54318efbd15b8c9820132977b2fa55e9ea997ccbf774a6e6a871a3086a36 not found: ID does not exist" Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.880386 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b4698c5-s5dg5"] Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.880439 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6698fbf897-82z8t"] Feb 18 11:52:41 crc kubenswrapper[4717]: I0218 11:52:41.885867 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gxzpl" podStartSLOduration=175.885850818 podStartE2EDuration="2m55.885850818s" podCreationTimestamp="2026-02-18 11:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:52:41.884332065 +0000 UTC m=+196.286433381" watchObservedRunningTime="2026-02-18 11:52:41.885850818 +0000 UTC m=+196.287952134" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.151807 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-588997d685-wtwj8_5e837eb9-1e61-437f-8416-50fd9e041228/route-controller-manager/0.log" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.152110 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.248850 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e837eb9-1e61-437f-8416-50fd9e041228-client-ca\") pod \"5e837eb9-1e61-437f-8416-50fd9e041228\" (UID: \"5e837eb9-1e61-437f-8416-50fd9e041228\") " Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.248916 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e837eb9-1e61-437f-8416-50fd9e041228-serving-cert\") pod \"5e837eb9-1e61-437f-8416-50fd9e041228\" (UID: \"5e837eb9-1e61-437f-8416-50fd9e041228\") " Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.248941 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e837eb9-1e61-437f-8416-50fd9e041228-config\") pod \"5e837eb9-1e61-437f-8416-50fd9e041228\" (UID: \"5e837eb9-1e61-437f-8416-50fd9e041228\") " Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.248973 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9842q\" (UniqueName: \"kubernetes.io/projected/5e837eb9-1e61-437f-8416-50fd9e041228-kube-api-access-9842q\") pod \"5e837eb9-1e61-437f-8416-50fd9e041228\" (UID: \"5e837eb9-1e61-437f-8416-50fd9e041228\") " Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.250766 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e837eb9-1e61-437f-8416-50fd9e041228-config" (OuterVolumeSpecName: "config") pod "5e837eb9-1e61-437f-8416-50fd9e041228" (UID: "5e837eb9-1e61-437f-8416-50fd9e041228"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.251849 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e837eb9-1e61-437f-8416-50fd9e041228-client-ca" (OuterVolumeSpecName: "client-ca") pod "5e837eb9-1e61-437f-8416-50fd9e041228" (UID: "5e837eb9-1e61-437f-8416-50fd9e041228"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.254401 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e837eb9-1e61-437f-8416-50fd9e041228-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5e837eb9-1e61-437f-8416-50fd9e041228" (UID: "5e837eb9-1e61-437f-8416-50fd9e041228"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.254515 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e837eb9-1e61-437f-8416-50fd9e041228-kube-api-access-9842q" (OuterVolumeSpecName: "kube-api-access-9842q") pod "5e837eb9-1e61-437f-8416-50fd9e041228" (UID: "5e837eb9-1e61-437f-8416-50fd9e041228"). InnerVolumeSpecName "kube-api-access-9842q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.350551 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9842q\" (UniqueName: \"kubernetes.io/projected/5e837eb9-1e61-437f-8416-50fd9e041228-kube-api-access-9842q\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.350589 4717 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e837eb9-1e61-437f-8416-50fd9e041228-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.350627 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e837eb9-1e61-437f-8416-50fd9e041228-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.350640 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e837eb9-1e61-437f-8416-50fd9e041228-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.772934 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.773237 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.836553 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-588997d685-wtwj8_5e837eb9-1e61-437f-8416-50fd9e041228/route-controller-manager/0.log" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.836650 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" event={"ID":"5e837eb9-1e61-437f-8416-50fd9e041228","Type":"ContainerDied","Data":"df6e7d2857ba12c2297103dab8cef5cbbfee587533155edde4a58e558a2dc3bb"} Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.836694 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.836694 4717 scope.go:117] "RemoveContainer" containerID="bcb711ecc7adc2fb3468405989a28a4700ae224d8b02fc2ff42151dac40d6b6a" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.838681 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d","Type":"ContainerStarted","Data":"64584b732ebc084ec0122736343ed8b212307c5c93ff49daa40db5862540e36e"} Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.838777 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d","Type":"ContainerStarted","Data":"eb5de7f26e7ad32ec09f350c027db321950850ce07a3aad606392be30d48d3ea"} Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.840381 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" event={"ID":"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d","Type":"ContainerStarted","Data":"24a81c149258312a76c44e74b0043a68f8a4a28fcacb1c75c00f9dfdae9a7281"} Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.840463 4717 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.840477 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" event={"ID":"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d","Type":"ContainerStarted","Data":"d50e381e91830043431c96ed2964e799bbcdc06f3bede6e718049d963fa47aa5"} Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.843063 4717 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9dzs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.843126 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.863819 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.873042 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" podStartSLOduration=7.873021058 podStartE2EDuration="7.873021058s" podCreationTimestamp="2026-02-18 11:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:52:42.862102111 +0000 UTC m=+197.264203427" watchObservedRunningTime="2026-02-18 11:52:42.873021058 +0000 UTC m=+197.275122374" Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 
11:52:42.879376 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8"] Feb 18 11:52:42 crc kubenswrapper[4717]: I0218 11:52:42.882390 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588997d685-wtwj8"] Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.057118 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0682e72a-30fc-44d7-a711-8c6ca19a277c" path="/var/lib/kubelet/pods/0682e72a-30fc-44d7-a711-8c6ca19a277c/volumes" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.057746 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e837eb9-1e61-437f-8416-50fd9e041228" path="/var/lib/kubelet/pods/5e837eb9-1e61-437f-8416-50fd9e041228/volumes" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.226990 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.268734 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc-kube-api-access\") pod \"3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc\" (UID: \"3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc\") " Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.269195 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc-kubelet-dir\") pod \"3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc\" (UID: \"3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc\") " Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.269580 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc-kubelet-dir" (OuterVolumeSpecName: 
"kubelet-dir") pod "3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc" (UID: "3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.275139 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc" (UID: "3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.321822 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf"] Feb 18 11:52:43 crc kubenswrapper[4717]: E0218 11:52:43.322050 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e837eb9-1e61-437f-8416-50fd9e041228" containerName="route-controller-manager" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.322069 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e837eb9-1e61-437f-8416-50fd9e041228" containerName="route-controller-manager" Feb 18 11:52:43 crc kubenswrapper[4717]: E0218 11:52:43.322086 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc" containerName="pruner" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.322093 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc" containerName="pruner" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.322200 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc" containerName="pruner" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.322214 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e837eb9-1e61-437f-8416-50fd9e041228" 
containerName="route-controller-manager" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.322644 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.325117 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.326731 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.326954 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.327186 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.327212 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.327334 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.342423 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf"] Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.371231 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xdx6\" (UniqueName: \"kubernetes.io/projected/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-kube-api-access-9xdx6\") pod \"route-controller-manager-69cf96d799-cxngf\" (UID: \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\") " 
pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.371328 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-serving-cert\") pod \"route-controller-manager-69cf96d799-cxngf\" (UID: \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\") " pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.371424 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-config\") pod \"route-controller-manager-69cf96d799-cxngf\" (UID: \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\") " pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.371454 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-client-ca\") pod \"route-controller-manager-69cf96d799-cxngf\" (UID: \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\") " pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.371531 4717 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.371545 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:43 crc kubenswrapper[4717]: 
I0218 11:52:43.472489 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-config\") pod \"route-controller-manager-69cf96d799-cxngf\" (UID: \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\") " pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.472570 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-client-ca\") pod \"route-controller-manager-69cf96d799-cxngf\" (UID: \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\") " pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.472602 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xdx6\" (UniqueName: \"kubernetes.io/projected/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-kube-api-access-9xdx6\") pod \"route-controller-manager-69cf96d799-cxngf\" (UID: \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\") " pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.472649 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-serving-cert\") pod \"route-controller-manager-69cf96d799-cxngf\" (UID: \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\") " pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.473778 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-config\") pod \"route-controller-manager-69cf96d799-cxngf\" (UID: 
\"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\") " pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.474846 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-client-ca\") pod \"route-controller-manager-69cf96d799-cxngf\" (UID: \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\") " pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.476953 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-serving-cert\") pod \"route-controller-manager-69cf96d799-cxngf\" (UID: \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\") " pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.489393 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xdx6\" (UniqueName: \"kubernetes.io/projected/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-kube-api-access-9xdx6\") pod \"route-controller-manager-69cf96d799-cxngf\" (UID: \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\") " pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.651779 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.850530 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3c5e4f2c-84e3-4cd9-a501-71634a4e2ccc","Type":"ContainerDied","Data":"44a54215f819218e26a498bfe9485e43550bbcb2f5ef2119e7753601d7bcbebf"} Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.850567 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44a54215f819218e26a498bfe9485e43550bbcb2f5ef2119e7753601d7bcbebf" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.850948 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 11:52:43 crc kubenswrapper[4717]: I0218 11:52:43.869252 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.869231498 podStartE2EDuration="2.869231498s" podCreationTimestamp="2026-02-18 11:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:52:43.863236463 +0000 UTC m=+198.265337779" watchObservedRunningTime="2026-02-18 11:52:43.869231498 +0000 UTC m=+198.271332814" Feb 18 11:52:44 crc kubenswrapper[4717]: I0218 11:52:44.273684 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf"] Feb 18 11:52:44 crc kubenswrapper[4717]: W0218 11:52:44.282185 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1ad78b6_b637_4a43_9391_1f5821d4b8bc.slice/crio-e45e8b6ad3545c8b72ca1272be1f4461c0a631234bc49c80ae960024b573876d WatchSource:0}: Error finding container 
e45e8b6ad3545c8b72ca1272be1f4461c0a631234bc49c80ae960024b573876d: Status 404 returned error can't find the container with id e45e8b6ad3545c8b72ca1272be1f4461c0a631234bc49c80ae960024b573876d Feb 18 11:52:44 crc kubenswrapper[4717]: I0218 11:52:44.859363 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2cjl" event={"ID":"154937ce-02f4-41f5-a061-fb7890e7cf40","Type":"ContainerStarted","Data":"4629c5e551491c5a2875cbf0afc3a5146c1f51da2c18009fea2cfa83e58a9deb"} Feb 18 11:52:44 crc kubenswrapper[4717]: I0218 11:52:44.864836 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" event={"ID":"e1ad78b6-b637-4a43-9391-1f5821d4b8bc","Type":"ContainerStarted","Data":"e45e8b6ad3545c8b72ca1272be1f4461c0a631234bc49c80ae960024b573876d"} Feb 18 11:52:44 crc kubenswrapper[4717]: I0218 11:52:44.885967 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m2cjl" podStartSLOduration=4.933597415 podStartE2EDuration="49.885949371s" podCreationTimestamp="2026-02-18 11:51:55 +0000 UTC" firstStartedPulling="2026-02-18 11:51:58.908505733 +0000 UTC m=+153.310607049" lastFinishedPulling="2026-02-18 11:52:43.860857689 +0000 UTC m=+198.262959005" observedRunningTime="2026-02-18 11:52:44.881606436 +0000 UTC m=+199.283707782" watchObservedRunningTime="2026-02-18 11:52:44.885949371 +0000 UTC m=+199.288050687" Feb 18 11:52:45 crc kubenswrapper[4717]: I0218 11:52:45.872249 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" event={"ID":"e1ad78b6-b637-4a43-9391-1f5821d4b8bc","Type":"ContainerStarted","Data":"43782a8eb40589a008001d422ef5d053d63860b06fa106245dd3bf24e4e15293"} Feb 18 11:52:45 crc kubenswrapper[4717]: I0218 11:52:45.872632 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:45 crc kubenswrapper[4717]: I0218 11:52:45.874667 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c9n76" event={"ID":"9250b3da-040d-4f0c-84d0-5d795bf3479d","Type":"ContainerStarted","Data":"8173a5561dcdc7e460fba80f8c006c85d70327f83c603867124927d0124e39a5"} Feb 18 11:52:45 crc kubenswrapper[4717]: I0218 11:52:45.881854 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:45 crc kubenswrapper[4717]: I0218 11:52:45.888441 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" podStartSLOduration=10.888419152 podStartE2EDuration="10.888419152s" podCreationTimestamp="2026-02-18 11:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:52:45.887607837 +0000 UTC m=+200.289709143" watchObservedRunningTime="2026-02-18 11:52:45.888419152 +0000 UTC m=+200.290520468" Feb 18 11:52:45 crc kubenswrapper[4717]: I0218 11:52:45.941840 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c9n76" podStartSLOduration=5.089149319 podStartE2EDuration="50.941820441s" podCreationTimestamp="2026-02-18 11:51:55 +0000 UTC" firstStartedPulling="2026-02-18 11:51:58.907446373 +0000 UTC m=+153.309547689" lastFinishedPulling="2026-02-18 11:52:44.760117485 +0000 UTC m=+199.162218811" observedRunningTime="2026-02-18 11:52:45.938296782 +0000 UTC m=+200.340398118" watchObservedRunningTime="2026-02-18 11:52:45.941820441 +0000 UTC m=+200.343921757" Feb 18 11:52:46 crc kubenswrapper[4717]: I0218 11:52:46.681929 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-c9n76" Feb 18 11:52:46 crc kubenswrapper[4717]: I0218 11:52:46.681971 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:52:46 crc kubenswrapper[4717]: I0218 11:52:46.681983 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:52:46 crc kubenswrapper[4717]: I0218 11:52:46.681992 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c9n76" Feb 18 11:52:47 crc kubenswrapper[4717]: I0218 11:52:47.669360 4717 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9dzs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 18 11:52:47 crc kubenswrapper[4717]: I0218 11:52:47.669680 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 18 11:52:47 crc kubenswrapper[4717]: I0218 11:52:47.669367 4717 patch_prober.go:28] interesting pod/downloads-7954f5f757-s9dzs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 18 11:52:47 crc kubenswrapper[4717]: I0218 11:52:47.669784 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-s9dzs" podUID="0e8734b5-4294-4091-b377-680aa4178a19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 
10.217.0.21:8080: connect: connection refused" Feb 18 11:52:47 crc kubenswrapper[4717]: I0218 11:52:47.887071 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ksxg" event={"ID":"e1b37906-b51d-4abf-be9c-8607a92dfa40","Type":"ContainerStarted","Data":"7ddd4d207a9dc33d61c0f01faf9b924e3abc36b4ad3e9744256533c6c87ae620"} Feb 18 11:52:47 crc kubenswrapper[4717]: I0218 11:52:47.906018 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6ksxg" podStartSLOduration=5.386362462 podStartE2EDuration="52.905998908s" podCreationTimestamp="2026-02-18 11:51:55 +0000 UTC" firstStartedPulling="2026-02-18 11:51:59.016166565 +0000 UTC m=+153.418267881" lastFinishedPulling="2026-02-18 11:52:46.535803011 +0000 UTC m=+200.937904327" observedRunningTime="2026-02-18 11:52:47.90250315 +0000 UTC m=+202.304604466" watchObservedRunningTime="2026-02-18 11:52:47.905998908 +0000 UTC m=+202.308100224" Feb 18 11:52:48 crc kubenswrapper[4717]: I0218 11:52:48.106138 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-c9n76" podUID="9250b3da-040d-4f0c-84d0-5d795bf3479d" containerName="registry-server" probeResult="failure" output=< Feb 18 11:52:48 crc kubenswrapper[4717]: timeout: failed to connect service ":50051" within 1s Feb 18 11:52:48 crc kubenswrapper[4717]: > Feb 18 11:52:48 crc kubenswrapper[4717]: I0218 11:52:48.106586 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-m2cjl" podUID="154937ce-02f4-41f5-a061-fb7890e7cf40" containerName="registry-server" probeResult="failure" output=< Feb 18 11:52:48 crc kubenswrapper[4717]: timeout: failed to connect service ":50051" within 1s Feb 18 11:52:48 crc kubenswrapper[4717]: > Feb 18 11:52:50 crc kubenswrapper[4717]: I0218 11:52:50.910671 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-5f7dc" event={"ID":"1df54bb8-8456-4c72-8cd4-49abc687ba4d","Type":"ContainerStarted","Data":"f304b6b71f8a978c224b57c14be4422a0bd7842789ed582a6888d06701e4e280"} Feb 18 11:52:52 crc kubenswrapper[4717]: I0218 11:52:52.926632 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mwxp" event={"ID":"3b07ffbf-aa95-48ef-baa5-68fb036483b7","Type":"ContainerStarted","Data":"a69ddd2c43e43a799a39eaf1e16687f5d896775545e45276419aa6b19e1f0800"} Feb 18 11:52:53 crc kubenswrapper[4717]: I0218 11:52:53.934859 4717 generic.go:334] "Generic (PLEG): container finished" podID="3b07ffbf-aa95-48ef-baa5-68fb036483b7" containerID="a69ddd2c43e43a799a39eaf1e16687f5d896775545e45276419aa6b19e1f0800" exitCode=0 Feb 18 11:52:53 crc kubenswrapper[4717]: I0218 11:52:53.934937 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mwxp" event={"ID":"3b07ffbf-aa95-48ef-baa5-68fb036483b7","Type":"ContainerDied","Data":"a69ddd2c43e43a799a39eaf1e16687f5d896775545e45276419aa6b19e1f0800"} Feb 18 11:52:53 crc kubenswrapper[4717]: I0218 11:52:53.949085 4717 generic.go:334] "Generic (PLEG): container finished" podID="1df54bb8-8456-4c72-8cd4-49abc687ba4d" containerID="f304b6b71f8a978c224b57c14be4422a0bd7842789ed582a6888d06701e4e280" exitCode=0 Feb 18 11:52:53 crc kubenswrapper[4717]: I0218 11:52:53.949143 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5f7dc" event={"ID":"1df54bb8-8456-4c72-8cd4-49abc687ba4d","Type":"ContainerDied","Data":"f304b6b71f8a978c224b57c14be4422a0bd7842789ed582a6888d06701e4e280"} Feb 18 11:52:55 crc kubenswrapper[4717]: I0218 11:52:55.136058 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6698fbf897-82z8t"] Feb 18 11:52:55 crc kubenswrapper[4717]: I0218 11:52:55.136381 4717 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" podUID="beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d" containerName="controller-manager" containerID="cri-o://24a81c149258312a76c44e74b0043a68f8a4a28fcacb1c75c00f9dfdae9a7281" gracePeriod=30 Feb 18 11:52:55 crc kubenswrapper[4717]: I0218 11:52:55.173840 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf"] Feb 18 11:52:55 crc kubenswrapper[4717]: I0218 11:52:55.174046 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" podUID="e1ad78b6-b637-4a43-9391-1f5821d4b8bc" containerName="route-controller-manager" containerID="cri-o://43782a8eb40589a008001d422ef5d053d63860b06fa106245dd3bf24e4e15293" gracePeriod=30 Feb 18 11:52:55 crc kubenswrapper[4717]: I0218 11:52:55.962595 4717 generic.go:334] "Generic (PLEG): container finished" podID="beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d" containerID="24a81c149258312a76c44e74b0043a68f8a4a28fcacb1c75c00f9dfdae9a7281" exitCode=0 Feb 18 11:52:55 crc kubenswrapper[4717]: I0218 11:52:55.962701 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" event={"ID":"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d","Type":"ContainerDied","Data":"24a81c149258312a76c44e74b0043a68f8a4a28fcacb1c75c00f9dfdae9a7281"} Feb 18 11:52:55 crc kubenswrapper[4717]: I0218 11:52:55.964897 4717 generic.go:334] "Generic (PLEG): container finished" podID="e1ad78b6-b637-4a43-9391-1f5821d4b8bc" containerID="43782a8eb40589a008001d422ef5d053d63860b06fa106245dd3bf24e4e15293" exitCode=0 Feb 18 11:52:55 crc kubenswrapper[4717]: I0218 11:52:55.964934 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" 
event={"ID":"e1ad78b6-b637-4a43-9391-1f5821d4b8bc","Type":"ContainerDied","Data":"43782a8eb40589a008001d422ef5d053d63860b06fa106245dd3bf24e4e15293"} Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.398656 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6ksxg" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.398750 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6ksxg" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.448701 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6ksxg" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.734802 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c9n76" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.747965 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.755059 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.757657 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.782532 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c9n76" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.790947 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68bd8b45db-2k46r"] Feb 18 11:52:56 crc kubenswrapper[4717]: E0218 11:52:56.791209 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d" containerName="controller-manager" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.791223 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d" containerName="controller-manager" Feb 18 11:52:56 crc kubenswrapper[4717]: E0218 11:52:56.791235 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1ad78b6-b637-4a43-9391-1f5821d4b8bc" containerName="route-controller-manager" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.791242 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1ad78b6-b637-4a43-9391-1f5821d4b8bc" containerName="route-controller-manager" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.791405 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1ad78b6-b637-4a43-9391-1f5821d4b8bc" containerName="route-controller-manager" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.791420 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d" containerName="controller-manager" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.791829 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.799460 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68bd8b45db-2k46r"] Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.820113 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.848744 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xdx6\" (UniqueName: \"kubernetes.io/projected/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-kube-api-access-9xdx6\") pod \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\" (UID: \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\") " Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.848795 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-config\") pod \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.848839 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdsd4\" (UniqueName: \"kubernetes.io/projected/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-kube-api-access-bdsd4\") pod \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.848881 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-client-ca\") pod \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\" (UID: \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\") " Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.848910 4717 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-proxy-ca-bundles\") pod \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.848951 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-client-ca\") pod \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.848994 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-config\") pod \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\" (UID: \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\") " Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.849060 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-serving-cert\") pod \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\" (UID: \"e1ad78b6-b637-4a43-9391-1f5821d4b8bc\") " Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.849116 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-serving-cert\") pod \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\" (UID: \"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d\") " Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.849296 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c49qb\" (UniqueName: \"kubernetes.io/projected/5ebf329c-e2e9-41ca-aecc-540b29707768-kube-api-access-c49qb\") pod \"controller-manager-68bd8b45db-2k46r\" (UID: 
\"5ebf329c-e2e9-41ca-aecc-540b29707768\") " pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.849353 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-config\") pod \"controller-manager-68bd8b45db-2k46r\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.849381 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-client-ca\") pod \"controller-manager-68bd8b45db-2k46r\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.849425 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-proxy-ca-bundles\") pod \"controller-manager-68bd8b45db-2k46r\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.849452 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ebf329c-e2e9-41ca-aecc-540b29707768-serving-cert\") pod \"controller-manager-68bd8b45db-2k46r\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.850469 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d" (UID: "beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.850597 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-client-ca" (OuterVolumeSpecName: "client-ca") pod "beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d" (UID: "beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.850733 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-config" (OuterVolumeSpecName: "config") pod "e1ad78b6-b637-4a43-9391-1f5821d4b8bc" (UID: "e1ad78b6-b637-4a43-9391-1f5821d4b8bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.851052 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-client-ca" (OuterVolumeSpecName: "client-ca") pod "e1ad78b6-b637-4a43-9391-1f5821d4b8bc" (UID: "e1ad78b6-b637-4a43-9391-1f5821d4b8bc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.851074 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-config" (OuterVolumeSpecName: "config") pod "beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d" (UID: "beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.855185 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-kube-api-access-9xdx6" (OuterVolumeSpecName: "kube-api-access-9xdx6") pod "e1ad78b6-b637-4a43-9391-1f5821d4b8bc" (UID: "e1ad78b6-b637-4a43-9391-1f5821d4b8bc"). InnerVolumeSpecName "kube-api-access-9xdx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.855611 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e1ad78b6-b637-4a43-9391-1f5821d4b8bc" (UID: "e1ad78b6-b637-4a43-9391-1f5821d4b8bc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.856387 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-kube-api-access-bdsd4" (OuterVolumeSpecName: "kube-api-access-bdsd4") pod "beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d" (UID: "beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d"). InnerVolumeSpecName "kube-api-access-bdsd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.866935 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d" (UID: "beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.950844 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-config\") pod \"controller-manager-68bd8b45db-2k46r\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.950890 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-client-ca\") pod \"controller-manager-68bd8b45db-2k46r\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.950910 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-proxy-ca-bundles\") pod \"controller-manager-68bd8b45db-2k46r\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.950941 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ebf329c-e2e9-41ca-aecc-540b29707768-serving-cert\") pod \"controller-manager-68bd8b45db-2k46r\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.950987 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c49qb\" (UniqueName: \"kubernetes.io/projected/5ebf329c-e2e9-41ca-aecc-540b29707768-kube-api-access-c49qb\") pod 
\"controller-manager-68bd8b45db-2k46r\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.951027 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdsd4\" (UniqueName: \"kubernetes.io/projected/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-kube-api-access-bdsd4\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.951037 4717 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.951047 4717 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.951055 4717 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.951064 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.951072 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.951080 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-serving-cert\") on node \"crc\" DevicePath 
\"\"" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.951091 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xdx6\" (UniqueName: \"kubernetes.io/projected/e1ad78b6-b637-4a43-9391-1f5821d4b8bc-kube-api-access-9xdx6\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.951103 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.952676 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-config\") pod \"controller-manager-68bd8b45db-2k46r\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.953233 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-client-ca\") pod \"controller-manager-68bd8b45db-2k46r\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.954342 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-proxy-ca-bundles\") pod \"controller-manager-68bd8b45db-2k46r\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.958619 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5ebf329c-e2e9-41ca-aecc-540b29707768-serving-cert\") pod \"controller-manager-68bd8b45db-2k46r\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.969970 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c49qb\" (UniqueName: \"kubernetes.io/projected/5ebf329c-e2e9-41ca-aecc-540b29707768-kube-api-access-c49qb\") pod \"controller-manager-68bd8b45db-2k46r\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.971284 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.971303 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6698fbf897-82z8t" event={"ID":"beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d","Type":"ContainerDied","Data":"d50e381e91830043431c96ed2964e799bbcdc06f3bede6e718049d963fa47aa5"} Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.971359 4717 scope.go:117] "RemoveContainer" containerID="24a81c149258312a76c44e74b0043a68f8a4a28fcacb1c75c00f9dfdae9a7281" Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.973140 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" event={"ID":"e1ad78b6-b637-4a43-9391-1f5821d4b8bc","Type":"ContainerDied","Data":"e45e8b6ad3545c8b72ca1272be1f4461c0a631234bc49c80ae960024b573876d"} Feb 18 11:52:56 crc kubenswrapper[4717]: I0218 11:52:56.973205 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf" Feb 18 11:52:57 crc kubenswrapper[4717]: I0218 11:52:57.008303 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6698fbf897-82z8t"] Feb 18 11:52:57 crc kubenswrapper[4717]: I0218 11:52:57.012104 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6698fbf897-82z8t"] Feb 18 11:52:57 crc kubenswrapper[4717]: I0218 11:52:57.021578 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf"] Feb 18 11:52:57 crc kubenswrapper[4717]: I0218 11:52:57.023747 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6ksxg" Feb 18 11:52:57 crc kubenswrapper[4717]: I0218 11:52:57.024553 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69cf96d799-cxngf"] Feb 18 11:52:57 crc kubenswrapper[4717]: I0218 11:52:57.050379 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d" path="/var/lib/kubelet/pods/beabc1f9-7aca-4dd0-8cfe-2ee20ff7690d/volumes" Feb 18 11:52:57 crc kubenswrapper[4717]: I0218 11:52:57.051207 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1ad78b6-b637-4a43-9391-1f5821d4b8bc" path="/var/lib/kubelet/pods/e1ad78b6-b637-4a43-9391-1f5821d4b8bc/volumes" Feb 18 11:52:57 crc kubenswrapper[4717]: I0218 11:52:57.112060 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:52:57 crc kubenswrapper[4717]: I0218 11:52:57.373391 4717 scope.go:117] "RemoveContainer" containerID="43782a8eb40589a008001d422ef5d053d63860b06fa106245dd3bf24e4e15293" Feb 18 11:52:57 crc kubenswrapper[4717]: I0218 11:52:57.682123 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-s9dzs" Feb 18 11:52:58 crc kubenswrapper[4717]: I0218 11:52:58.262992 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m2cjl"] Feb 18 11:52:58 crc kubenswrapper[4717]: I0218 11:52:58.263211 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m2cjl" podUID="154937ce-02f4-41f5-a061-fb7890e7cf40" containerName="registry-server" containerID="cri-o://4629c5e551491c5a2875cbf0afc3a5146c1f51da2c18009fea2cfa83e58a9deb" gracePeriod=2 Feb 18 11:52:58 crc kubenswrapper[4717]: I0218 11:52:58.991572 4717 generic.go:334] "Generic (PLEG): container finished" podID="154937ce-02f4-41f5-a061-fb7890e7cf40" containerID="4629c5e551491c5a2875cbf0afc3a5146c1f51da2c18009fea2cfa83e58a9deb" exitCode=0 Feb 18 11:52:58 crc kubenswrapper[4717]: I0218 11:52:58.991612 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2cjl" event={"ID":"154937ce-02f4-41f5-a061-fb7890e7cf40","Type":"ContainerDied","Data":"4629c5e551491c5a2875cbf0afc3a5146c1f51da2c18009fea2cfa83e58a9deb"} Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.335069 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt"] Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.336033 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.337917 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.338671 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.339873 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.340066 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.340456 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.342832 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.347742 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt"] Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.397739 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0584ba33-ee0a-4969-9a97-55d9c4086601-config\") pod \"route-controller-manager-6444d9bcbb-vhlrt\" (UID: \"0584ba33-ee0a-4969-9a97-55d9c4086601\") " pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.397807 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt9bb\" (UniqueName: \"kubernetes.io/projected/0584ba33-ee0a-4969-9a97-55d9c4086601-kube-api-access-pt9bb\") pod \"route-controller-manager-6444d9bcbb-vhlrt\" (UID: \"0584ba33-ee0a-4969-9a97-55d9c4086601\") " pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.397850 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0584ba33-ee0a-4969-9a97-55d9c4086601-serving-cert\") pod \"route-controller-manager-6444d9bcbb-vhlrt\" (UID: \"0584ba33-ee0a-4969-9a97-55d9c4086601\") " pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.397867 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0584ba33-ee0a-4969-9a97-55d9c4086601-client-ca\") pod \"route-controller-manager-6444d9bcbb-vhlrt\" (UID: \"0584ba33-ee0a-4969-9a97-55d9c4086601\") " pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.499329 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0584ba33-ee0a-4969-9a97-55d9c4086601-serving-cert\") pod \"route-controller-manager-6444d9bcbb-vhlrt\" (UID: \"0584ba33-ee0a-4969-9a97-55d9c4086601\") " pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.500031 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0584ba33-ee0a-4969-9a97-55d9c4086601-client-ca\") pod 
\"route-controller-manager-6444d9bcbb-vhlrt\" (UID: \"0584ba33-ee0a-4969-9a97-55d9c4086601\") " pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.500098 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0584ba33-ee0a-4969-9a97-55d9c4086601-config\") pod \"route-controller-manager-6444d9bcbb-vhlrt\" (UID: \"0584ba33-ee0a-4969-9a97-55d9c4086601\") " pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.500145 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt9bb\" (UniqueName: \"kubernetes.io/projected/0584ba33-ee0a-4969-9a97-55d9c4086601-kube-api-access-pt9bb\") pod \"route-controller-manager-6444d9bcbb-vhlrt\" (UID: \"0584ba33-ee0a-4969-9a97-55d9c4086601\") " pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.500407 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0584ba33-ee0a-4969-9a97-55d9c4086601-client-ca\") pod \"route-controller-manager-6444d9bcbb-vhlrt\" (UID: \"0584ba33-ee0a-4969-9a97-55d9c4086601\") " pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.514547 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0584ba33-ee0a-4969-9a97-55d9c4086601-serving-cert\") pod \"route-controller-manager-6444d9bcbb-vhlrt\" (UID: \"0584ba33-ee0a-4969-9a97-55d9c4086601\") " pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.519242 4717 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pt9bb\" (UniqueName: \"kubernetes.io/projected/0584ba33-ee0a-4969-9a97-55d9c4086601-kube-api-access-pt9bb\") pod \"route-controller-manager-6444d9bcbb-vhlrt\" (UID: \"0584ba33-ee0a-4969-9a97-55d9c4086601\") " pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.574433 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0584ba33-ee0a-4969-9a97-55d9c4086601-config\") pod \"route-controller-manager-6444d9bcbb-vhlrt\" (UID: \"0584ba33-ee0a-4969-9a97-55d9c4086601\") " pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:52:59 crc kubenswrapper[4717]: I0218 11:52:59.712799 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:53:00 crc kubenswrapper[4717]: I0218 11:53:00.334515 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68bd8b45db-2k46r"] Feb 18 11:53:00 crc kubenswrapper[4717]: W0218 11:53:00.597183 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ebf329c_e2e9_41ca_aecc_540b29707768.slice/crio-b520ac011ad3b2464d47711db366b1189d698ab8914b80ca5454d710f984417d WatchSource:0}: Error finding container b520ac011ad3b2464d47711db366b1189d698ab8914b80ca5454d710f984417d: Status 404 returned error can't find the container with id b520ac011ad3b2464d47711db366b1189d698ab8914b80ca5454d710f984417d Feb 18 11:53:00 crc kubenswrapper[4717]: I0218 11:53:00.673316 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:53:00 crc kubenswrapper[4717]: I0218 11:53:00.815384 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/154937ce-02f4-41f5-a061-fb7890e7cf40-utilities\") pod \"154937ce-02f4-41f5-a061-fb7890e7cf40\" (UID: \"154937ce-02f4-41f5-a061-fb7890e7cf40\") " Feb 18 11:53:00 crc kubenswrapper[4717]: I0218 11:53:00.815739 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgjvb\" (UniqueName: \"kubernetes.io/projected/154937ce-02f4-41f5-a061-fb7890e7cf40-kube-api-access-tgjvb\") pod \"154937ce-02f4-41f5-a061-fb7890e7cf40\" (UID: \"154937ce-02f4-41f5-a061-fb7890e7cf40\") " Feb 18 11:53:00 crc kubenswrapper[4717]: I0218 11:53:00.815833 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/154937ce-02f4-41f5-a061-fb7890e7cf40-catalog-content\") pod \"154937ce-02f4-41f5-a061-fb7890e7cf40\" (UID: \"154937ce-02f4-41f5-a061-fb7890e7cf40\") " Feb 18 11:53:00 crc kubenswrapper[4717]: I0218 11:53:00.817347 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/154937ce-02f4-41f5-a061-fb7890e7cf40-utilities" (OuterVolumeSpecName: "utilities") pod "154937ce-02f4-41f5-a061-fb7890e7cf40" (UID: "154937ce-02f4-41f5-a061-fb7890e7cf40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:53:00 crc kubenswrapper[4717]: I0218 11:53:00.827549 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/154937ce-02f4-41f5-a061-fb7890e7cf40-kube-api-access-tgjvb" (OuterVolumeSpecName: "kube-api-access-tgjvb") pod "154937ce-02f4-41f5-a061-fb7890e7cf40" (UID: "154937ce-02f4-41f5-a061-fb7890e7cf40"). InnerVolumeSpecName "kube-api-access-tgjvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:53:00 crc kubenswrapper[4717]: I0218 11:53:00.909920 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/154937ce-02f4-41f5-a061-fb7890e7cf40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "154937ce-02f4-41f5-a061-fb7890e7cf40" (UID: "154937ce-02f4-41f5-a061-fb7890e7cf40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:53:00 crc kubenswrapper[4717]: I0218 11:53:00.936615 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/154937ce-02f4-41f5-a061-fb7890e7cf40-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:00 crc kubenswrapper[4717]: I0218 11:53:00.936655 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgjvb\" (UniqueName: \"kubernetes.io/projected/154937ce-02f4-41f5-a061-fb7890e7cf40-kube-api-access-tgjvb\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:00 crc kubenswrapper[4717]: I0218 11:53:00.936667 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/154937ce-02f4-41f5-a061-fb7890e7cf40-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:00 crc kubenswrapper[4717]: I0218 11:53:00.992886 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt"] Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.029890 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mwxp" event={"ID":"3b07ffbf-aa95-48ef-baa5-68fb036483b7","Type":"ContainerStarted","Data":"c73dbcb212e14b7107aa944776df518bd71725aeb80a67d07937416cf05e2711"} Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.033596 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-4swdf" event={"ID":"739737ee-803a-478a-a7f2-de797ffeca2a","Type":"ContainerStarted","Data":"c8e082975b5c383264d023f2d935a7b61abab7c77fc7acbc6c10040c6b7df81e"} Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.040320 4717 patch_prober.go:28] interesting pod/controller-manager-68bd8b45db-2k46r container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.040394 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" podUID="5ebf329c-e2e9-41ca-aecc-540b29707768" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.045772 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.045813 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" event={"ID":"5ebf329c-e2e9-41ca-aecc-540b29707768","Type":"ContainerStarted","Data":"b50274e4a8df969506dbae83ea11f77b239b8b937dc7a190daa87339ed28b32d"} Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.045836 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" event={"ID":"5ebf329c-e2e9-41ca-aecc-540b29707768","Type":"ContainerStarted","Data":"b520ac011ad3b2464d47711db366b1189d698ab8914b80ca5454d710f984417d"} Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.048754 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-5f7dc" event={"ID":"1df54bb8-8456-4c72-8cd4-49abc687ba4d","Type":"ContainerStarted","Data":"f9d278871db3e25ba7482d6ab135c49e2601ee30ebd7c87a6acbd6f1892308ab"} Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.053177 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2cjl" event={"ID":"154937ce-02f4-41f5-a061-fb7890e7cf40","Type":"ContainerDied","Data":"f0139ced8b027db48697856698855f47734df57778fe861c0d663f343319ec32"} Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.053270 4717 scope.go:117] "RemoveContainer" containerID="4629c5e551491c5a2875cbf0afc3a5146c1f51da2c18009fea2cfa83e58a9deb" Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.053206 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m2cjl" Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.055056 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" event={"ID":"0584ba33-ee0a-4969-9a97-55d9c4086601","Type":"ContainerStarted","Data":"739251ffec255ca305312b7c5ccc939a482e753e7282a132ec2db32c2871c754"} Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.065855 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7mwxp" podStartSLOduration=2.592924328 podStartE2EDuration="1m3.065838594s" podCreationTimestamp="2026-02-18 11:51:58 +0000 UTC" firstStartedPulling="2026-02-18 11:52:00.108536502 +0000 UTC m=+154.510637818" lastFinishedPulling="2026-02-18 11:53:00.581450768 +0000 UTC m=+214.983552084" observedRunningTime="2026-02-18 11:53:01.065494223 +0000 UTC m=+215.467595539" watchObservedRunningTime="2026-02-18 11:53:01.065838594 +0000 UTC m=+215.467939900" Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.077176 4717 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-n6fmv" event={"ID":"0ee5d22d-8884-4563-8329-c475346f3a03","Type":"ContainerStarted","Data":"cf751783a5390166b45fdc4360b319a38bdecb260b178c551c7ca26e30749d74"} Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.086516 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bvgx" event={"ID":"fb729e3d-5019-4004-876e-c5d39e77e97e","Type":"ContainerStarted","Data":"db193766575fa2b9e5ec498bd333fb38727ff98c44164f9c03c8d2596282c08d"} Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.088457 4717 scope.go:117] "RemoveContainer" containerID="7f43ca82909b91477189510d747363fa77ee915ac3644cb504cb45f989ef0bdf" Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.118807 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" podStartSLOduration=6.118786909 podStartE2EDuration="6.118786909s" podCreationTimestamp="2026-02-18 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:53:01.116681264 +0000 UTC m=+215.518782600" watchObservedRunningTime="2026-02-18 11:53:01.118786909 +0000 UTC m=+215.520888225" Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.135568 4717 scope.go:117] "RemoveContainer" containerID="50f64f8b1792894fb97656073756851bba6dc9ff0e3576a50ff04f123a1d3293" Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.148379 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5f7dc" podStartSLOduration=5.592138044 podStartE2EDuration="1m2.148358752s" podCreationTimestamp="2026-02-18 11:51:59 +0000 UTC" firstStartedPulling="2026-02-18 11:52:00.143428714 +0000 UTC m=+154.545530030" lastFinishedPulling="2026-02-18 11:52:56.699649422 +0000 UTC m=+211.101750738" observedRunningTime="2026-02-18 
11:53:01.146506705 +0000 UTC m=+215.548608021" watchObservedRunningTime="2026-02-18 11:53:01.148358752 +0000 UTC m=+215.550460068" Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.199624 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m2cjl"] Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.203285 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m2cjl"] Feb 18 11:53:01 crc kubenswrapper[4717]: I0218 11:53:01.698638 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" podUID="518edf1a-e4f5-450a-90ff-151dc3106649" containerName="oauth-openshift" containerID="cri-o://7d2539e2246ceca13c8124a69fb40237bbdaf5be157519649a3625f3c777e29e" gracePeriod=15 Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.096320 4717 generic.go:334] "Generic (PLEG): container finished" podID="739737ee-803a-478a-a7f2-de797ffeca2a" containerID="c8e082975b5c383264d023f2d935a7b61abab7c77fc7acbc6c10040c6b7df81e" exitCode=0 Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.096382 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4swdf" event={"ID":"739737ee-803a-478a-a7f2-de797ffeca2a","Type":"ContainerDied","Data":"c8e082975b5c383264d023f2d935a7b61abab7c77fc7acbc6c10040c6b7df81e"} Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.101528 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" event={"ID":"0584ba33-ee0a-4969-9a97-55d9c4086601","Type":"ContainerStarted","Data":"5b8f541e435e7f91194886739d3729d18fc238d1ca389a29206c0a1320c44992"} Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.102210 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.105876 4717 generic.go:334] "Generic (PLEG): container finished" podID="0ee5d22d-8884-4563-8329-c475346f3a03" containerID="cf751783a5390166b45fdc4360b319a38bdecb260b178c551c7ca26e30749d74" exitCode=0 Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.105913 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6fmv" event={"ID":"0ee5d22d-8884-4563-8329-c475346f3a03","Type":"ContainerDied","Data":"cf751783a5390166b45fdc4360b319a38bdecb260b178c551c7ca26e30749d74"} Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.110479 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.110747 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.160718 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" podStartSLOduration=7.16069934 podStartE2EDuration="7.16069934s" podCreationTimestamp="2026-02-18 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:53:02.15813372 +0000 UTC m=+216.560235046" watchObservedRunningTime="2026-02-18 11:53:02.16069934 +0000 UTC m=+216.562800666" Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.896489 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.980084 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-cliconfig\") pod \"518edf1a-e4f5-450a-90ff-151dc3106649\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.980147 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-login\") pod \"518edf1a-e4f5-450a-90ff-151dc3106649\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.980183 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-router-certs\") pod \"518edf1a-e4f5-450a-90ff-151dc3106649\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.980235 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-provider-selection\") pod \"518edf1a-e4f5-450a-90ff-151dc3106649\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.980329 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-ocp-branding-template\") pod 
\"518edf1a-e4f5-450a-90ff-151dc3106649\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.980356 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-idp-0-file-data\") pod \"518edf1a-e4f5-450a-90ff-151dc3106649\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.980396 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-session\") pod \"518edf1a-e4f5-450a-90ff-151dc3106649\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.980416 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-error\") pod \"518edf1a-e4f5-450a-90ff-151dc3106649\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.980449 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-audit-policies\") pod \"518edf1a-e4f5-450a-90ff-151dc3106649\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.981530 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-service-ca\") pod \"518edf1a-e4f5-450a-90ff-151dc3106649\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " Feb 18 11:53:02 crc 
kubenswrapper[4717]: I0218 11:53:02.981560 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/518edf1a-e4f5-450a-90ff-151dc3106649-audit-dir\") pod \"518edf1a-e4f5-450a-90ff-151dc3106649\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.981582 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-serving-cert\") pod \"518edf1a-e4f5-450a-90ff-151dc3106649\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.981632 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj7dr\" (UniqueName: \"kubernetes.io/projected/518edf1a-e4f5-450a-90ff-151dc3106649-kube-api-access-nj7dr\") pod \"518edf1a-e4f5-450a-90ff-151dc3106649\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.981648 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-trusted-ca-bundle\") pod \"518edf1a-e4f5-450a-90ff-151dc3106649\" (UID: \"518edf1a-e4f5-450a-90ff-151dc3106649\") " Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.982277 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "518edf1a-e4f5-450a-90ff-151dc3106649" (UID: "518edf1a-e4f5-450a-90ff-151dc3106649"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.982461 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "518edf1a-e4f5-450a-90ff-151dc3106649" (UID: "518edf1a-e4f5-450a-90ff-151dc3106649"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.982501 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/518edf1a-e4f5-450a-90ff-151dc3106649-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "518edf1a-e4f5-450a-90ff-151dc3106649" (UID: "518edf1a-e4f5-450a-90ff-151dc3106649"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.982636 4717 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/518edf1a-e4f5-450a-90ff-151dc3106649-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.982774 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.982785 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.984347 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "518edf1a-e4f5-450a-90ff-151dc3106649" (UID: "518edf1a-e4f5-450a-90ff-151dc3106649"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:02 crc kubenswrapper[4717]: I0218 11:53:02.993596 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "518edf1a-e4f5-450a-90ff-151dc3106649" (UID: "518edf1a-e4f5-450a-90ff-151dc3106649"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.001899 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "518edf1a-e4f5-450a-90ff-151dc3106649" (UID: "518edf1a-e4f5-450a-90ff-151dc3106649"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.002602 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/518edf1a-e4f5-450a-90ff-151dc3106649-kube-api-access-nj7dr" (OuterVolumeSpecName: "kube-api-access-nj7dr") pod "518edf1a-e4f5-450a-90ff-151dc3106649" (UID: "518edf1a-e4f5-450a-90ff-151dc3106649"). InnerVolumeSpecName "kube-api-access-nj7dr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.007424 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "518edf1a-e4f5-450a-90ff-151dc3106649" (UID: "518edf1a-e4f5-450a-90ff-151dc3106649"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.007931 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "518edf1a-e4f5-450a-90ff-151dc3106649" (UID: "518edf1a-e4f5-450a-90ff-151dc3106649"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.010556 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "518edf1a-e4f5-450a-90ff-151dc3106649" (UID: "518edf1a-e4f5-450a-90ff-151dc3106649"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.010794 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "518edf1a-e4f5-450a-90ff-151dc3106649" (UID: "518edf1a-e4f5-450a-90ff-151dc3106649"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.015053 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "518edf1a-e4f5-450a-90ff-151dc3106649" (UID: "518edf1a-e4f5-450a-90ff-151dc3106649"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.016449 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "518edf1a-e4f5-450a-90ff-151dc3106649" (UID: "518edf1a-e4f5-450a-90ff-151dc3106649"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.020233 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "518edf1a-e4f5-450a-90ff-151dc3106649" (UID: "518edf1a-e4f5-450a-90ff-151dc3106649"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.043711 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="154937ce-02f4-41f5-a061-fb7890e7cf40" path="/var/lib/kubelet/pods/154937ce-02f4-41f5-a061-fb7890e7cf40/volumes" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.083810 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.083849 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.083866 4717 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.083883 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.083895 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.083907 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj7dr\" (UniqueName: 
\"kubernetes.io/projected/518edf1a-e4f5-450a-90ff-151dc3106649-kube-api-access-nj7dr\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.083918 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.083928 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.083940 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.083954 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.083965 4717 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/518edf1a-e4f5-450a-90ff-151dc3106649-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.113220 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4swdf" event={"ID":"739737ee-803a-478a-a7f2-de797ffeca2a","Type":"ContainerStarted","Data":"0760fac7474c60ec92ace16778408099baa5b6b0643887bdae0b1c7adee390dd"} Feb 18 
11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.114895 4717 generic.go:334] "Generic (PLEG): container finished" podID="518edf1a-e4f5-450a-90ff-151dc3106649" containerID="7d2539e2246ceca13c8124a69fb40237bbdaf5be157519649a3625f3c777e29e" exitCode=0 Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.114949 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.114960 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" event={"ID":"518edf1a-e4f5-450a-90ff-151dc3106649","Type":"ContainerDied","Data":"7d2539e2246ceca13c8124a69fb40237bbdaf5be157519649a3625f3c777e29e"} Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.114984 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rzqs7" event={"ID":"518edf1a-e4f5-450a-90ff-151dc3106649","Type":"ContainerDied","Data":"92d5f1e703a5f20f348cebecf24e24b47685eb9300ea3e0c3985a525a181ab8d"} Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.115013 4717 scope.go:117] "RemoveContainer" containerID="7d2539e2246ceca13c8124a69fb40237bbdaf5be157519649a3625f3c777e29e" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.119035 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6fmv" event={"ID":"0ee5d22d-8884-4563-8329-c475346f3a03","Type":"ContainerStarted","Data":"cdb82536d8a178659451ab204eca77f8785168ea183cceda6ccdecbb9f976649"} Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.125775 4717 generic.go:334] "Generic (PLEG): container finished" podID="fb729e3d-5019-4004-876e-c5d39e77e97e" containerID="db193766575fa2b9e5ec498bd333fb38727ff98c44164f9c03c8d2596282c08d" exitCode=0 Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.126382 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-6bvgx" event={"ID":"fb729e3d-5019-4004-876e-c5d39e77e97e","Type":"ContainerDied","Data":"db193766575fa2b9e5ec498bd333fb38727ff98c44164f9c03c8d2596282c08d"} Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.134842 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4swdf" podStartSLOduration=3.639932447 podStartE2EDuration="1m7.134820257s" podCreationTimestamp="2026-02-18 11:51:56 +0000 UTC" firstStartedPulling="2026-02-18 11:51:59.011089511 +0000 UTC m=+153.413190827" lastFinishedPulling="2026-02-18 11:53:02.505977321 +0000 UTC m=+216.908078637" observedRunningTime="2026-02-18 11:53:03.130957058 +0000 UTC m=+217.533058374" watchObservedRunningTime="2026-02-18 11:53:03.134820257 +0000 UTC m=+217.536921573" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.137641 4717 scope.go:117] "RemoveContainer" containerID="7d2539e2246ceca13c8124a69fb40237bbdaf5be157519649a3625f3c777e29e" Feb 18 11:53:03 crc kubenswrapper[4717]: E0218 11:53:03.138852 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d2539e2246ceca13c8124a69fb40237bbdaf5be157519649a3625f3c777e29e\": container with ID starting with 7d2539e2246ceca13c8124a69fb40237bbdaf5be157519649a3625f3c777e29e not found: ID does not exist" containerID="7d2539e2246ceca13c8124a69fb40237bbdaf5be157519649a3625f3c777e29e" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.138890 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d2539e2246ceca13c8124a69fb40237bbdaf5be157519649a3625f3c777e29e"} err="failed to get container status \"7d2539e2246ceca13c8124a69fb40237bbdaf5be157519649a3625f3c777e29e\": rpc error: code = NotFound desc = could not find container \"7d2539e2246ceca13c8124a69fb40237bbdaf5be157519649a3625f3c777e29e\": container with ID starting with 
7d2539e2246ceca13c8124a69fb40237bbdaf5be157519649a3625f3c777e29e not found: ID does not exist" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.189710 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n6fmv" podStartSLOduration=3.611124133 podStartE2EDuration="1m6.189673901s" podCreationTimestamp="2026-02-18 11:51:57 +0000 UTC" firstStartedPulling="2026-02-18 11:52:00.045766106 +0000 UTC m=+154.447867422" lastFinishedPulling="2026-02-18 11:53:02.624315874 +0000 UTC m=+217.026417190" observedRunningTime="2026-02-18 11:53:03.178352931 +0000 UTC m=+217.580454247" watchObservedRunningTime="2026-02-18 11:53:03.189673901 +0000 UTC m=+217.591775217" Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.193338 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rzqs7"] Feb 18 11:53:03 crc kubenswrapper[4717]: I0218 11:53:03.194986 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rzqs7"] Feb 18 11:53:04 crc kubenswrapper[4717]: I0218 11:53:04.136280 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bvgx" event={"ID":"fb729e3d-5019-4004-876e-c5d39e77e97e","Type":"ContainerStarted","Data":"aaabb66559ab55486f70349a8c9f409fb242a151776e8f411bc6d314d8a63cee"} Feb 18 11:53:04 crc kubenswrapper[4717]: I0218 11:53:04.157142 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6bvgx" podStartSLOduration=2.687093692 podStartE2EDuration="1m6.157124801s" podCreationTimestamp="2026-02-18 11:51:58 +0000 UTC" firstStartedPulling="2026-02-18 11:52:00.048093333 +0000 UTC m=+154.450194639" lastFinishedPulling="2026-02-18 11:53:03.518124432 +0000 UTC m=+217.920225748" observedRunningTime="2026-02-18 11:53:04.153411187 +0000 UTC m=+218.555512503" watchObservedRunningTime="2026-02-18 
11:53:04.157124801 +0000 UTC m=+218.559226117" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.043821 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="518edf1a-e4f5-450a-90ff-151dc3106649" path="/var/lib/kubelet/pods/518edf1a-e4f5-450a-90ff-151dc3106649/volumes" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.343339 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7874f76df5-g6h29"] Feb 18 11:53:05 crc kubenswrapper[4717]: E0218 11:53:05.343560 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="154937ce-02f4-41f5-a061-fb7890e7cf40" containerName="extract-utilities" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.343571 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="154937ce-02f4-41f5-a061-fb7890e7cf40" containerName="extract-utilities" Feb 18 11:53:05 crc kubenswrapper[4717]: E0218 11:53:05.343579 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518edf1a-e4f5-450a-90ff-151dc3106649" containerName="oauth-openshift" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.343586 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="518edf1a-e4f5-450a-90ff-151dc3106649" containerName="oauth-openshift" Feb 18 11:53:05 crc kubenswrapper[4717]: E0218 11:53:05.343593 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="154937ce-02f4-41f5-a061-fb7890e7cf40" containerName="extract-content" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.343599 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="154937ce-02f4-41f5-a061-fb7890e7cf40" containerName="extract-content" Feb 18 11:53:05 crc kubenswrapper[4717]: E0218 11:53:05.343614 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="154937ce-02f4-41f5-a061-fb7890e7cf40" containerName="registry-server" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.343620 4717 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="154937ce-02f4-41f5-a061-fb7890e7cf40" containerName="registry-server" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.343744 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="154937ce-02f4-41f5-a061-fb7890e7cf40" containerName="registry-server" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.343757 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="518edf1a-e4f5-450a-90ff-151dc3106649" containerName="oauth-openshift" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.344184 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.345928 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.348070 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.348218 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.348375 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.348527 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.350602 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.350819 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" 
Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.350895 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.351286 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.352866 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.354407 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.355550 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.361842 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.362314 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.368501 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7874f76df5-g6h29"] Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.369663 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.412178 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.412241 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-audit-dir\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.412280 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.412300 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-service-ca\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.412323 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-user-template-login\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " 
pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.412340 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-audit-policies\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.412397 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-session\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.412417 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.412446 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.412462 4717 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-user-template-error\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.412480 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.412497 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-router-certs\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.412515 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tldkg\" (UniqueName: \"kubernetes.io/projected/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-kube-api-access-tldkg\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.412549 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.514052 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.514920 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-user-template-error\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.514953 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.514976 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-router-certs\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " 
pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.514995 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tldkg\" (UniqueName: \"kubernetes.io/projected/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-kube-api-access-tldkg\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.515030 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.515069 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.515087 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-audit-dir\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.515104 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.515122 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-service-ca\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.515146 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-user-template-login\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.515164 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-audit-policies\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.515200 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-audit-dir\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 
11:53:05.515205 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-session\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.515278 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.516329 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.516414 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-service-ca\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.517426 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.517981 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-audit-policies\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.523811 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-session\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.523891 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.524078 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 
crc kubenswrapper[4717]: I0218 11:53:05.524197 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.524234 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.524384 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-system-router-certs\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.526859 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-user-template-error\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.527983 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-v4-0-config-user-template-login\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.539714 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tldkg\" (UniqueName: \"kubernetes.io/projected/8be94a47-2fe5-4c1b-8e6d-81719ecae59e-kube-api-access-tldkg\") pod \"oauth-openshift-7874f76df5-g6h29\" (UID: \"8be94a47-2fe5-4c1b-8e6d-81719ecae59e\") " pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:05 crc kubenswrapper[4717]: I0218 11:53:05.662485 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:06 crc kubenswrapper[4717]: I0218 11:53:06.125786 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7874f76df5-g6h29"] Feb 18 11:53:06 crc kubenswrapper[4717]: W0218 11:53:06.133682 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8be94a47_2fe5_4c1b_8e6d_81719ecae59e.slice/crio-e91bc4f1b4d9cd9480cf52b3bd204b5a88559a1f582561b12594f9b92ad1f372 WatchSource:0}: Error finding container e91bc4f1b4d9cd9480cf52b3bd204b5a88559a1f582561b12594f9b92ad1f372: Status 404 returned error can't find the container with id e91bc4f1b4d9cd9480cf52b3bd204b5a88559a1f582561b12594f9b92ad1f372 Feb 18 11:53:06 crc kubenswrapper[4717]: I0218 11:53:06.146330 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" event={"ID":"8be94a47-2fe5-4c1b-8e6d-81719ecae59e","Type":"ContainerStarted","Data":"e91bc4f1b4d9cd9480cf52b3bd204b5a88559a1f582561b12594f9b92ad1f372"} Feb 18 11:53:06 crc kubenswrapper[4717]: I0218 11:53:06.643651 4717 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:53:06 crc kubenswrapper[4717]: I0218 11:53:06.644017 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:53:06 crc kubenswrapper[4717]: I0218 11:53:06.701249 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:53:07 crc kubenswrapper[4717]: I0218 11:53:07.153363 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" event={"ID":"8be94a47-2fe5-4c1b-8e6d-81719ecae59e","Type":"ContainerStarted","Data":"99f77c706d2e0e8c87c6636c8838827585e6e4646e4e9a3e05fb6048da5f8390"} Feb 18 11:53:07 crc kubenswrapper[4717]: I0218 11:53:07.154002 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:07 crc kubenswrapper[4717]: I0218 11:53:07.161718 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" Feb 18 11:53:07 crc kubenswrapper[4717]: I0218 11:53:07.177179 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7874f76df5-g6h29" podStartSLOduration=31.177159629 podStartE2EDuration="31.177159629s" podCreationTimestamp="2026-02-18 11:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:53:07.174138616 +0000 UTC m=+221.576239932" watchObservedRunningTime="2026-02-18 11:53:07.177159629 +0000 UTC m=+221.579260945" Feb 18 11:53:07 crc kubenswrapper[4717]: I0218 11:53:07.225329 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:53:08 crc kubenswrapper[4717]: I0218 11:53:08.212642 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n6fmv" Feb 18 11:53:08 crc kubenswrapper[4717]: I0218 11:53:08.213345 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n6fmv" Feb 18 11:53:08 crc kubenswrapper[4717]: I0218 11:53:08.267437 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n6fmv" Feb 18 11:53:08 crc kubenswrapper[4717]: I0218 11:53:08.662292 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4swdf"] Feb 18 11:53:08 crc kubenswrapper[4717]: I0218 11:53:08.929209 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:53:08 crc kubenswrapper[4717]: I0218 11:53:08.929311 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:53:08 crc kubenswrapper[4717]: I0218 11:53:08.989061 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.086372 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6bvgx" Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.086654 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6bvgx" Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.172169 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4swdf" podUID="739737ee-803a-478a-a7f2-de797ffeca2a" 
containerName="registry-server" containerID="cri-o://0760fac7474c60ec92ace16778408099baa5b6b0643887bdae0b1c7adee390dd" gracePeriod=2 Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.213783 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.221986 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n6fmv" Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.473340 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.473742 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.532802 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.721280 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.781020 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/739737ee-803a-478a-a7f2-de797ffeca2a-utilities\") pod \"739737ee-803a-478a-a7f2-de797ffeca2a\" (UID: \"739737ee-803a-478a-a7f2-de797ffeca2a\") " Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.781073 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/739737ee-803a-478a-a7f2-de797ffeca2a-catalog-content\") pod \"739737ee-803a-478a-a7f2-de797ffeca2a\" (UID: \"739737ee-803a-478a-a7f2-de797ffeca2a\") " Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.781135 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8r4d\" (UniqueName: \"kubernetes.io/projected/739737ee-803a-478a-a7f2-de797ffeca2a-kube-api-access-v8r4d\") pod \"739737ee-803a-478a-a7f2-de797ffeca2a\" (UID: \"739737ee-803a-478a-a7f2-de797ffeca2a\") " Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.782039 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/739737ee-803a-478a-a7f2-de797ffeca2a-utilities" (OuterVolumeSpecName: "utilities") pod "739737ee-803a-478a-a7f2-de797ffeca2a" (UID: "739737ee-803a-478a-a7f2-de797ffeca2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.786867 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/739737ee-803a-478a-a7f2-de797ffeca2a-kube-api-access-v8r4d" (OuterVolumeSpecName: "kube-api-access-v8r4d") pod "739737ee-803a-478a-a7f2-de797ffeca2a" (UID: "739737ee-803a-478a-a7f2-de797ffeca2a"). InnerVolumeSpecName "kube-api-access-v8r4d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.841568 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/739737ee-803a-478a-a7f2-de797ffeca2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "739737ee-803a-478a-a7f2-de797ffeca2a" (UID: "739737ee-803a-478a-a7f2-de797ffeca2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.882281 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8r4d\" (UniqueName: \"kubernetes.io/projected/739737ee-803a-478a-a7f2-de797ffeca2a-kube-api-access-v8r4d\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.882550 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/739737ee-803a-478a-a7f2-de797ffeca2a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:09 crc kubenswrapper[4717]: I0218 11:53:09.882614 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/739737ee-803a-478a-a7f2-de797ffeca2a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:10 crc kubenswrapper[4717]: I0218 11:53:10.124911 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6bvgx" podUID="fb729e3d-5019-4004-876e-c5d39e77e97e" containerName="registry-server" probeResult="failure" output=< Feb 18 11:53:10 crc kubenswrapper[4717]: timeout: failed to connect service ":50051" within 1s Feb 18 11:53:10 crc kubenswrapper[4717]: > Feb 18 11:53:10 crc kubenswrapper[4717]: I0218 11:53:10.176951 4717 generic.go:334] "Generic (PLEG): container finished" podID="739737ee-803a-478a-a7f2-de797ffeca2a" containerID="0760fac7474c60ec92ace16778408099baa5b6b0643887bdae0b1c7adee390dd" exitCode=0 Feb 18 11:53:10 
crc kubenswrapper[4717]: I0218 11:53:10.177058 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4swdf" Feb 18 11:53:10 crc kubenswrapper[4717]: I0218 11:53:10.177090 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4swdf" event={"ID":"739737ee-803a-478a-a7f2-de797ffeca2a","Type":"ContainerDied","Data":"0760fac7474c60ec92ace16778408099baa5b6b0643887bdae0b1c7adee390dd"} Feb 18 11:53:10 crc kubenswrapper[4717]: I0218 11:53:10.177150 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4swdf" event={"ID":"739737ee-803a-478a-a7f2-de797ffeca2a","Type":"ContainerDied","Data":"b6ee8298f8a48b6a15c7cdb36b089071e12836759fded7522fd97386e1cc27ea"} Feb 18 11:53:10 crc kubenswrapper[4717]: I0218 11:53:10.177175 4717 scope.go:117] "RemoveContainer" containerID="0760fac7474c60ec92ace16778408099baa5b6b0643887bdae0b1c7adee390dd" Feb 18 11:53:10 crc kubenswrapper[4717]: I0218 11:53:10.198689 4717 scope.go:117] "RemoveContainer" containerID="c8e082975b5c383264d023f2d935a7b61abab7c77fc7acbc6c10040c6b7df81e" Feb 18 11:53:10 crc kubenswrapper[4717]: I0218 11:53:10.207951 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4swdf"] Feb 18 11:53:10 crc kubenswrapper[4717]: I0218 11:53:10.211077 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4swdf"] Feb 18 11:53:10 crc kubenswrapper[4717]: I0218 11:53:10.230863 4717 scope.go:117] "RemoveContainer" containerID="ec3073ebe62d4b2031ee5451ff6637cbb54a0b3d18914ee6f18c2eb26fa324bb" Feb 18 11:53:10 crc kubenswrapper[4717]: I0218 11:53:10.231139 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:53:10 crc kubenswrapper[4717]: I0218 11:53:10.246228 4717 scope.go:117] "RemoveContainer" 
containerID="0760fac7474c60ec92ace16778408099baa5b6b0643887bdae0b1c7adee390dd" Feb 18 11:53:10 crc kubenswrapper[4717]: E0218 11:53:10.247494 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0760fac7474c60ec92ace16778408099baa5b6b0643887bdae0b1c7adee390dd\": container with ID starting with 0760fac7474c60ec92ace16778408099baa5b6b0643887bdae0b1c7adee390dd not found: ID does not exist" containerID="0760fac7474c60ec92ace16778408099baa5b6b0643887bdae0b1c7adee390dd" Feb 18 11:53:10 crc kubenswrapper[4717]: I0218 11:53:10.247533 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0760fac7474c60ec92ace16778408099baa5b6b0643887bdae0b1c7adee390dd"} err="failed to get container status \"0760fac7474c60ec92ace16778408099baa5b6b0643887bdae0b1c7adee390dd\": rpc error: code = NotFound desc = could not find container \"0760fac7474c60ec92ace16778408099baa5b6b0643887bdae0b1c7adee390dd\": container with ID starting with 0760fac7474c60ec92ace16778408099baa5b6b0643887bdae0b1c7adee390dd not found: ID does not exist" Feb 18 11:53:10 crc kubenswrapper[4717]: I0218 11:53:10.247563 4717 scope.go:117] "RemoveContainer" containerID="c8e082975b5c383264d023f2d935a7b61abab7c77fc7acbc6c10040c6b7df81e" Feb 18 11:53:10 crc kubenswrapper[4717]: E0218 11:53:10.248018 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8e082975b5c383264d023f2d935a7b61abab7c77fc7acbc6c10040c6b7df81e\": container with ID starting with c8e082975b5c383264d023f2d935a7b61abab7c77fc7acbc6c10040c6b7df81e not found: ID does not exist" containerID="c8e082975b5c383264d023f2d935a7b61abab7c77fc7acbc6c10040c6b7df81e" Feb 18 11:53:10 crc kubenswrapper[4717]: I0218 11:53:10.248047 4717 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c8e082975b5c383264d023f2d935a7b61abab7c77fc7acbc6c10040c6b7df81e"} err="failed to get container status \"c8e082975b5c383264d023f2d935a7b61abab7c77fc7acbc6c10040c6b7df81e\": rpc error: code = NotFound desc = could not find container \"c8e082975b5c383264d023f2d935a7b61abab7c77fc7acbc6c10040c6b7df81e\": container with ID starting with c8e082975b5c383264d023f2d935a7b61abab7c77fc7acbc6c10040c6b7df81e not found: ID does not exist" Feb 18 11:53:10 crc kubenswrapper[4717]: I0218 11:53:10.248066 4717 scope.go:117] "RemoveContainer" containerID="ec3073ebe62d4b2031ee5451ff6637cbb54a0b3d18914ee6f18c2eb26fa324bb" Feb 18 11:53:10 crc kubenswrapper[4717]: E0218 11:53:10.259852 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec3073ebe62d4b2031ee5451ff6637cbb54a0b3d18914ee6f18c2eb26fa324bb\": container with ID starting with ec3073ebe62d4b2031ee5451ff6637cbb54a0b3d18914ee6f18c2eb26fa324bb not found: ID does not exist" containerID="ec3073ebe62d4b2031ee5451ff6637cbb54a0b3d18914ee6f18c2eb26fa324bb" Feb 18 11:53:10 crc kubenswrapper[4717]: I0218 11:53:10.259912 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec3073ebe62d4b2031ee5451ff6637cbb54a0b3d18914ee6f18c2eb26fa324bb"} err="failed to get container status \"ec3073ebe62d4b2031ee5451ff6637cbb54a0b3d18914ee6f18c2eb26fa324bb\": rpc error: code = NotFound desc = could not find container \"ec3073ebe62d4b2031ee5451ff6637cbb54a0b3d18914ee6f18c2eb26fa324bb\": container with ID starting with ec3073ebe62d4b2031ee5451ff6637cbb54a0b3d18914ee6f18c2eb26fa324bb not found: ID does not exist" Feb 18 11:53:11 crc kubenswrapper[4717]: I0218 11:53:11.043455 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="739737ee-803a-478a-a7f2-de797ffeca2a" path="/var/lib/kubelet/pods/739737ee-803a-478a-a7f2-de797ffeca2a/volumes" Feb 18 11:53:11 crc kubenswrapper[4717]: I0218 
11:53:11.069610 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5f7dc"] Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.188956 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5f7dc" podUID="1df54bb8-8456-4c72-8cd4-49abc687ba4d" containerName="registry-server" containerID="cri-o://f9d278871db3e25ba7482d6ab135c49e2601ee30ebd7c87a6acbd6f1892308ab" gracePeriod=2 Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.466297 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mwxp"] Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.466538 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7mwxp" podUID="3b07ffbf-aa95-48ef-baa5-68fb036483b7" containerName="registry-server" containerID="cri-o://c73dbcb212e14b7107aa944776df518bd71725aeb80a67d07937416cf05e2711" gracePeriod=2 Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.665844 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.721915 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1df54bb8-8456-4c72-8cd4-49abc687ba4d-utilities\") pod \"1df54bb8-8456-4c72-8cd4-49abc687ba4d\" (UID: \"1df54bb8-8456-4c72-8cd4-49abc687ba4d\") " Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.722016 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmlpj\" (UniqueName: \"kubernetes.io/projected/1df54bb8-8456-4c72-8cd4-49abc687ba4d-kube-api-access-hmlpj\") pod \"1df54bb8-8456-4c72-8cd4-49abc687ba4d\" (UID: \"1df54bb8-8456-4c72-8cd4-49abc687ba4d\") " Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.722080 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1df54bb8-8456-4c72-8cd4-49abc687ba4d-catalog-content\") pod \"1df54bb8-8456-4c72-8cd4-49abc687ba4d\" (UID: \"1df54bb8-8456-4c72-8cd4-49abc687ba4d\") " Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.724306 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1df54bb8-8456-4c72-8cd4-49abc687ba4d-utilities" (OuterVolumeSpecName: "utilities") pod "1df54bb8-8456-4c72-8cd4-49abc687ba4d" (UID: "1df54bb8-8456-4c72-8cd4-49abc687ba4d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.732712 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1df54bb8-8456-4c72-8cd4-49abc687ba4d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.741580 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df54bb8-8456-4c72-8cd4-49abc687ba4d-kube-api-access-hmlpj" (OuterVolumeSpecName: "kube-api-access-hmlpj") pod "1df54bb8-8456-4c72-8cd4-49abc687ba4d" (UID: "1df54bb8-8456-4c72-8cd4-49abc687ba4d"). InnerVolumeSpecName "kube-api-access-hmlpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.773196 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.773275 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.773331 4717 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.773921 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727"} pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.773986 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" containerID="cri-o://440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727" gracePeriod=600 Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.834343 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmlpj\" (UniqueName: \"kubernetes.io/projected/1df54bb8-8456-4c72-8cd4-49abc687ba4d-kube-api-access-hmlpj\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.845609 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1df54bb8-8456-4c72-8cd4-49abc687ba4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1df54bb8-8456-4c72-8cd4-49abc687ba4d" (UID: "1df54bb8-8456-4c72-8cd4-49abc687ba4d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.888892 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.935307 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn29z\" (UniqueName: \"kubernetes.io/projected/3b07ffbf-aa95-48ef-baa5-68fb036483b7-kube-api-access-mn29z\") pod \"3b07ffbf-aa95-48ef-baa5-68fb036483b7\" (UID: \"3b07ffbf-aa95-48ef-baa5-68fb036483b7\") " Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.935622 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b07ffbf-aa95-48ef-baa5-68fb036483b7-catalog-content\") pod \"3b07ffbf-aa95-48ef-baa5-68fb036483b7\" (UID: \"3b07ffbf-aa95-48ef-baa5-68fb036483b7\") " Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.935678 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b07ffbf-aa95-48ef-baa5-68fb036483b7-utilities\") pod \"3b07ffbf-aa95-48ef-baa5-68fb036483b7\" (UID: \"3b07ffbf-aa95-48ef-baa5-68fb036483b7\") " Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.935947 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1df54bb8-8456-4c72-8cd4-49abc687ba4d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.936751 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b07ffbf-aa95-48ef-baa5-68fb036483b7-utilities" (OuterVolumeSpecName: "utilities") pod "3b07ffbf-aa95-48ef-baa5-68fb036483b7" (UID: "3b07ffbf-aa95-48ef-baa5-68fb036483b7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.938866 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b07ffbf-aa95-48ef-baa5-68fb036483b7-kube-api-access-mn29z" (OuterVolumeSpecName: "kube-api-access-mn29z") pod "3b07ffbf-aa95-48ef-baa5-68fb036483b7" (UID: "3b07ffbf-aa95-48ef-baa5-68fb036483b7"). InnerVolumeSpecName "kube-api-access-mn29z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:53:12 crc kubenswrapper[4717]: I0218 11:53:12.956908 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b07ffbf-aa95-48ef-baa5-68fb036483b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b07ffbf-aa95-48ef-baa5-68fb036483b7" (UID: "3b07ffbf-aa95-48ef-baa5-68fb036483b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.037227 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b07ffbf-aa95-48ef-baa5-68fb036483b7-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.037280 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn29z\" (UniqueName: \"kubernetes.io/projected/3b07ffbf-aa95-48ef-baa5-68fb036483b7-kube-api-access-mn29z\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.037292 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b07ffbf-aa95-48ef-baa5-68fb036483b7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.198183 4717 generic.go:334] "Generic (PLEG): container finished" podID="3b07ffbf-aa95-48ef-baa5-68fb036483b7" 
containerID="c73dbcb212e14b7107aa944776df518bd71725aeb80a67d07937416cf05e2711" exitCode=0 Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.198284 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mwxp" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.198284 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mwxp" event={"ID":"3b07ffbf-aa95-48ef-baa5-68fb036483b7","Type":"ContainerDied","Data":"c73dbcb212e14b7107aa944776df518bd71725aeb80a67d07937416cf05e2711"} Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.198667 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mwxp" event={"ID":"3b07ffbf-aa95-48ef-baa5-68fb036483b7","Type":"ContainerDied","Data":"c144a3bcd7d650e0b88ec103f16cd97c451839e9dd016ba135406733c381cca6"} Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.198709 4717 scope.go:117] "RemoveContainer" containerID="c73dbcb212e14b7107aa944776df518bd71725aeb80a67d07937416cf05e2711" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.201531 4717 generic.go:334] "Generic (PLEG): container finished" podID="1df54bb8-8456-4c72-8cd4-49abc687ba4d" containerID="f9d278871db3e25ba7482d6ab135c49e2601ee30ebd7c87a6acbd6f1892308ab" exitCode=0 Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.201600 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5f7dc" event={"ID":"1df54bb8-8456-4c72-8cd4-49abc687ba4d","Type":"ContainerDied","Data":"f9d278871db3e25ba7482d6ab135c49e2601ee30ebd7c87a6acbd6f1892308ab"} Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.202079 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5f7dc" event={"ID":"1df54bb8-8456-4c72-8cd4-49abc687ba4d","Type":"ContainerDied","Data":"0c9d8585de690f1821869ac9f23809374bd7a22434707ce79d796ddf0fbaa3b2"} 
Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.201780 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5f7dc" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.204744 4717 generic.go:334] "Generic (PLEG): container finished" podID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerID="440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727" exitCode=0 Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.204778 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerDied","Data":"440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727"} Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.216508 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mwxp"] Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.219055 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mwxp"] Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.219113 4717 scope.go:117] "RemoveContainer" containerID="a69ddd2c43e43a799a39eaf1e16687f5d896775545e45276419aa6b19e1f0800" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.231770 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5f7dc"] Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.235274 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5f7dc"] Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.240716 4717 scope.go:117] "RemoveContainer" containerID="aca524bdac9fa82fded5725add16bbf50e70e2f1d835016117af2dca36ba9a7f" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.253983 4717 scope.go:117] "RemoveContainer" 
containerID="c73dbcb212e14b7107aa944776df518bd71725aeb80a67d07937416cf05e2711" Feb 18 11:53:13 crc kubenswrapper[4717]: E0218 11:53:13.254378 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c73dbcb212e14b7107aa944776df518bd71725aeb80a67d07937416cf05e2711\": container with ID starting with c73dbcb212e14b7107aa944776df518bd71725aeb80a67d07937416cf05e2711 not found: ID does not exist" containerID="c73dbcb212e14b7107aa944776df518bd71725aeb80a67d07937416cf05e2711" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.254425 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c73dbcb212e14b7107aa944776df518bd71725aeb80a67d07937416cf05e2711"} err="failed to get container status \"c73dbcb212e14b7107aa944776df518bd71725aeb80a67d07937416cf05e2711\": rpc error: code = NotFound desc = could not find container \"c73dbcb212e14b7107aa944776df518bd71725aeb80a67d07937416cf05e2711\": container with ID starting with c73dbcb212e14b7107aa944776df518bd71725aeb80a67d07937416cf05e2711 not found: ID does not exist" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.254456 4717 scope.go:117] "RemoveContainer" containerID="a69ddd2c43e43a799a39eaf1e16687f5d896775545e45276419aa6b19e1f0800" Feb 18 11:53:13 crc kubenswrapper[4717]: E0218 11:53:13.254755 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a69ddd2c43e43a799a39eaf1e16687f5d896775545e45276419aa6b19e1f0800\": container with ID starting with a69ddd2c43e43a799a39eaf1e16687f5d896775545e45276419aa6b19e1f0800 not found: ID does not exist" containerID="a69ddd2c43e43a799a39eaf1e16687f5d896775545e45276419aa6b19e1f0800" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.254778 4717 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a69ddd2c43e43a799a39eaf1e16687f5d896775545e45276419aa6b19e1f0800"} err="failed to get container status \"a69ddd2c43e43a799a39eaf1e16687f5d896775545e45276419aa6b19e1f0800\": rpc error: code = NotFound desc = could not find container \"a69ddd2c43e43a799a39eaf1e16687f5d896775545e45276419aa6b19e1f0800\": container with ID starting with a69ddd2c43e43a799a39eaf1e16687f5d896775545e45276419aa6b19e1f0800 not found: ID does not exist" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.254791 4717 scope.go:117] "RemoveContainer" containerID="aca524bdac9fa82fded5725add16bbf50e70e2f1d835016117af2dca36ba9a7f" Feb 18 11:53:13 crc kubenswrapper[4717]: E0218 11:53:13.256196 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aca524bdac9fa82fded5725add16bbf50e70e2f1d835016117af2dca36ba9a7f\": container with ID starting with aca524bdac9fa82fded5725add16bbf50e70e2f1d835016117af2dca36ba9a7f not found: ID does not exist" containerID="aca524bdac9fa82fded5725add16bbf50e70e2f1d835016117af2dca36ba9a7f" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.256223 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aca524bdac9fa82fded5725add16bbf50e70e2f1d835016117af2dca36ba9a7f"} err="failed to get container status \"aca524bdac9fa82fded5725add16bbf50e70e2f1d835016117af2dca36ba9a7f\": rpc error: code = NotFound desc = could not find container \"aca524bdac9fa82fded5725add16bbf50e70e2f1d835016117af2dca36ba9a7f\": container with ID starting with aca524bdac9fa82fded5725add16bbf50e70e2f1d835016117af2dca36ba9a7f not found: ID does not exist" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.256241 4717 scope.go:117] "RemoveContainer" containerID="f9d278871db3e25ba7482d6ab135c49e2601ee30ebd7c87a6acbd6f1892308ab" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.273199 4717 scope.go:117] "RemoveContainer" 
containerID="f304b6b71f8a978c224b57c14be4422a0bd7842789ed582a6888d06701e4e280" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.289369 4717 scope.go:117] "RemoveContainer" containerID="c265e597b41f433e321fe733e3cb0879f5d5219a733017f9d18fee96accad6e0" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.304357 4717 scope.go:117] "RemoveContainer" containerID="f9d278871db3e25ba7482d6ab135c49e2601ee30ebd7c87a6acbd6f1892308ab" Feb 18 11:53:13 crc kubenswrapper[4717]: E0218 11:53:13.304729 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9d278871db3e25ba7482d6ab135c49e2601ee30ebd7c87a6acbd6f1892308ab\": container with ID starting with f9d278871db3e25ba7482d6ab135c49e2601ee30ebd7c87a6acbd6f1892308ab not found: ID does not exist" containerID="f9d278871db3e25ba7482d6ab135c49e2601ee30ebd7c87a6acbd6f1892308ab" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.304767 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9d278871db3e25ba7482d6ab135c49e2601ee30ebd7c87a6acbd6f1892308ab"} err="failed to get container status \"f9d278871db3e25ba7482d6ab135c49e2601ee30ebd7c87a6acbd6f1892308ab\": rpc error: code = NotFound desc = could not find container \"f9d278871db3e25ba7482d6ab135c49e2601ee30ebd7c87a6acbd6f1892308ab\": container with ID starting with f9d278871db3e25ba7482d6ab135c49e2601ee30ebd7c87a6acbd6f1892308ab not found: ID does not exist" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.304793 4717 scope.go:117] "RemoveContainer" containerID="f304b6b71f8a978c224b57c14be4422a0bd7842789ed582a6888d06701e4e280" Feb 18 11:53:13 crc kubenswrapper[4717]: E0218 11:53:13.305231 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f304b6b71f8a978c224b57c14be4422a0bd7842789ed582a6888d06701e4e280\": container with ID starting with 
f304b6b71f8a978c224b57c14be4422a0bd7842789ed582a6888d06701e4e280 not found: ID does not exist" containerID="f304b6b71f8a978c224b57c14be4422a0bd7842789ed582a6888d06701e4e280" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.305283 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f304b6b71f8a978c224b57c14be4422a0bd7842789ed582a6888d06701e4e280"} err="failed to get container status \"f304b6b71f8a978c224b57c14be4422a0bd7842789ed582a6888d06701e4e280\": rpc error: code = NotFound desc = could not find container \"f304b6b71f8a978c224b57c14be4422a0bd7842789ed582a6888d06701e4e280\": container with ID starting with f304b6b71f8a978c224b57c14be4422a0bd7842789ed582a6888d06701e4e280 not found: ID does not exist" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.305316 4717 scope.go:117] "RemoveContainer" containerID="c265e597b41f433e321fe733e3cb0879f5d5219a733017f9d18fee96accad6e0" Feb 18 11:53:13 crc kubenswrapper[4717]: E0218 11:53:13.305575 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c265e597b41f433e321fe733e3cb0879f5d5219a733017f9d18fee96accad6e0\": container with ID starting with c265e597b41f433e321fe733e3cb0879f5d5219a733017f9d18fee96accad6e0 not found: ID does not exist" containerID="c265e597b41f433e321fe733e3cb0879f5d5219a733017f9d18fee96accad6e0" Feb 18 11:53:13 crc kubenswrapper[4717]: I0218 11:53:13.305600 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c265e597b41f433e321fe733e3cb0879f5d5219a733017f9d18fee96accad6e0"} err="failed to get container status \"c265e597b41f433e321fe733e3cb0879f5d5219a733017f9d18fee96accad6e0\": rpc error: code = NotFound desc = could not find container \"c265e597b41f433e321fe733e3cb0879f5d5219a733017f9d18fee96accad6e0\": container with ID starting with c265e597b41f433e321fe733e3cb0879f5d5219a733017f9d18fee96accad6e0 not found: ID does not 
exist" Feb 18 11:53:14 crc kubenswrapper[4717]: I0218 11:53:14.213862 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"94e1dd922b2fb688b0271b6031b400dcd65ef629725f95285a8e4f6c5fddd404"} Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.054965 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1df54bb8-8456-4c72-8cd4-49abc687ba4d" path="/var/lib/kubelet/pods/1df54bb8-8456-4c72-8cd4-49abc687ba4d/volumes" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.056254 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b07ffbf-aa95-48ef-baa5-68fb036483b7" path="/var/lib/kubelet/pods/3b07ffbf-aa95-48ef-baa5-68fb036483b7/volumes" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.128351 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68bd8b45db-2k46r"] Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.128837 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" podUID="5ebf329c-e2e9-41ca-aecc-540b29707768" containerName="controller-manager" containerID="cri-o://b50274e4a8df969506dbae83ea11f77b239b8b937dc7a190daa87339ed28b32d" gracePeriod=30 Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.243174 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt"] Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.243393 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" podUID="0584ba33-ee0a-4969-9a97-55d9c4086601" containerName="route-controller-manager" 
containerID="cri-o://5b8f541e435e7f91194886739d3729d18fc238d1ca389a29206c0a1320c44992" gracePeriod=30 Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.677419 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.756636 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.773970 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0584ba33-ee0a-4969-9a97-55d9c4086601-config\") pod \"0584ba33-ee0a-4969-9a97-55d9c4086601\" (UID: \"0584ba33-ee0a-4969-9a97-55d9c4086601\") " Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.774070 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt9bb\" (UniqueName: \"kubernetes.io/projected/0584ba33-ee0a-4969-9a97-55d9c4086601-kube-api-access-pt9bb\") pod \"0584ba33-ee0a-4969-9a97-55d9c4086601\" (UID: \"0584ba33-ee0a-4969-9a97-55d9c4086601\") " Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.774146 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0584ba33-ee0a-4969-9a97-55d9c4086601-client-ca\") pod \"0584ba33-ee0a-4969-9a97-55d9c4086601\" (UID: \"0584ba33-ee0a-4969-9a97-55d9c4086601\") " Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.774184 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0584ba33-ee0a-4969-9a97-55d9c4086601-serving-cert\") pod \"0584ba33-ee0a-4969-9a97-55d9c4086601\" (UID: \"0584ba33-ee0a-4969-9a97-55d9c4086601\") " Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 
11:53:15.775914 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0584ba33-ee0a-4969-9a97-55d9c4086601-config" (OuterVolumeSpecName: "config") pod "0584ba33-ee0a-4969-9a97-55d9c4086601" (UID: "0584ba33-ee0a-4969-9a97-55d9c4086601"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.776038 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0584ba33-ee0a-4969-9a97-55d9c4086601-client-ca" (OuterVolumeSpecName: "client-ca") pod "0584ba33-ee0a-4969-9a97-55d9c4086601" (UID: "0584ba33-ee0a-4969-9a97-55d9c4086601"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.783420 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0584ba33-ee0a-4969-9a97-55d9c4086601-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0584ba33-ee0a-4969-9a97-55d9c4086601" (UID: "0584ba33-ee0a-4969-9a97-55d9c4086601"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.783512 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0584ba33-ee0a-4969-9a97-55d9c4086601-kube-api-access-pt9bb" (OuterVolumeSpecName: "kube-api-access-pt9bb") pod "0584ba33-ee0a-4969-9a97-55d9c4086601" (UID: "0584ba33-ee0a-4969-9a97-55d9c4086601"). InnerVolumeSpecName "kube-api-access-pt9bb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.875326 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ebf329c-e2e9-41ca-aecc-540b29707768-serving-cert\") pod \"5ebf329c-e2e9-41ca-aecc-540b29707768\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.875384 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-proxy-ca-bundles\") pod \"5ebf329c-e2e9-41ca-aecc-540b29707768\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.875446 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c49qb\" (UniqueName: \"kubernetes.io/projected/5ebf329c-e2e9-41ca-aecc-540b29707768-kube-api-access-c49qb\") pod \"5ebf329c-e2e9-41ca-aecc-540b29707768\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.875471 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-config\") pod \"5ebf329c-e2e9-41ca-aecc-540b29707768\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.875581 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-client-ca\") pod \"5ebf329c-e2e9-41ca-aecc-540b29707768\" (UID: \"5ebf329c-e2e9-41ca-aecc-540b29707768\") " Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.875849 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt9bb\" (UniqueName: 
\"kubernetes.io/projected/0584ba33-ee0a-4969-9a97-55d9c4086601-kube-api-access-pt9bb\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.875865 4717 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0584ba33-ee0a-4969-9a97-55d9c4086601-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.875877 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0584ba33-ee0a-4969-9a97-55d9c4086601-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.875891 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0584ba33-ee0a-4969-9a97-55d9c4086601-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.876573 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-client-ca" (OuterVolumeSpecName: "client-ca") pod "5ebf329c-e2e9-41ca-aecc-540b29707768" (UID: "5ebf329c-e2e9-41ca-aecc-540b29707768"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.876601 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5ebf329c-e2e9-41ca-aecc-540b29707768" (UID: "5ebf329c-e2e9-41ca-aecc-540b29707768"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.876750 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-config" (OuterVolumeSpecName: "config") pod "5ebf329c-e2e9-41ca-aecc-540b29707768" (UID: "5ebf329c-e2e9-41ca-aecc-540b29707768"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.878876 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ebf329c-e2e9-41ca-aecc-540b29707768-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5ebf329c-e2e9-41ca-aecc-540b29707768" (UID: "5ebf329c-e2e9-41ca-aecc-540b29707768"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.879325 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebf329c-e2e9-41ca-aecc-540b29707768-kube-api-access-c49qb" (OuterVolumeSpecName: "kube-api-access-c49qb") pod "5ebf329c-e2e9-41ca-aecc-540b29707768" (UID: "5ebf329c-e2e9-41ca-aecc-540b29707768"). InnerVolumeSpecName "kube-api-access-c49qb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.977588 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c49qb\" (UniqueName: \"kubernetes.io/projected/5ebf329c-e2e9-41ca-aecc-540b29707768-kube-api-access-c49qb\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.977638 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.977649 4717 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.977660 4717 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ebf329c-e2e9-41ca-aecc-540b29707768-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:15 crc kubenswrapper[4717]: I0218 11:53:15.977669 4717 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ebf329c-e2e9-41ca-aecc-540b29707768-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.239761 4717 generic.go:334] "Generic (PLEG): container finished" podID="5ebf329c-e2e9-41ca-aecc-540b29707768" containerID="b50274e4a8df969506dbae83ea11f77b239b8b937dc7a190daa87339ed28b32d" exitCode=0 Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.239855 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" event={"ID":"5ebf329c-e2e9-41ca-aecc-540b29707768","Type":"ContainerDied","Data":"b50274e4a8df969506dbae83ea11f77b239b8b937dc7a190daa87339ed28b32d"} Feb 18 11:53:16 crc 
kubenswrapper[4717]: I0218 11:53:16.239887 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" event={"ID":"5ebf329c-e2e9-41ca-aecc-540b29707768","Type":"ContainerDied","Data":"b520ac011ad3b2464d47711db366b1189d698ab8914b80ca5454d710f984417d"} Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.239906 4717 scope.go:117] "RemoveContainer" containerID="b50274e4a8df969506dbae83ea11f77b239b8b937dc7a190daa87339ed28b32d" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.239998 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68bd8b45db-2k46r" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.241728 4717 generic.go:334] "Generic (PLEG): container finished" podID="0584ba33-ee0a-4969-9a97-55d9c4086601" containerID="5b8f541e435e7f91194886739d3729d18fc238d1ca389a29206c0a1320c44992" exitCode=0 Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.241753 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" event={"ID":"0584ba33-ee0a-4969-9a97-55d9c4086601","Type":"ContainerDied","Data":"5b8f541e435e7f91194886739d3729d18fc238d1ca389a29206c0a1320c44992"} Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.241769 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" event={"ID":"0584ba33-ee0a-4969-9a97-55d9c4086601","Type":"ContainerDied","Data":"739251ffec255ca305312b7c5ccc939a482e753e7282a132ec2db32c2871c754"} Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.241804 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.276114 4717 scope.go:117] "RemoveContainer" containerID="b50274e4a8df969506dbae83ea11f77b239b8b937dc7a190daa87339ed28b32d" Feb 18 11:53:16 crc kubenswrapper[4717]: E0218 11:53:16.276884 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b50274e4a8df969506dbae83ea11f77b239b8b937dc7a190daa87339ed28b32d\": container with ID starting with b50274e4a8df969506dbae83ea11f77b239b8b937dc7a190daa87339ed28b32d not found: ID does not exist" containerID="b50274e4a8df969506dbae83ea11f77b239b8b937dc7a190daa87339ed28b32d" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.276915 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b50274e4a8df969506dbae83ea11f77b239b8b937dc7a190daa87339ed28b32d"} err="failed to get container status \"b50274e4a8df969506dbae83ea11f77b239b8b937dc7a190daa87339ed28b32d\": rpc error: code = NotFound desc = could not find container \"b50274e4a8df969506dbae83ea11f77b239b8b937dc7a190daa87339ed28b32d\": container with ID starting with b50274e4a8df969506dbae83ea11f77b239b8b937dc7a190daa87339ed28b32d not found: ID does not exist" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.276935 4717 scope.go:117] "RemoveContainer" containerID="5b8f541e435e7f91194886739d3729d18fc238d1ca389a29206c0a1320c44992" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.286998 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt"] Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.290606 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6444d9bcbb-vhlrt"] Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.296916 4717 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68bd8b45db-2k46r"] Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.299508 4717 scope.go:117] "RemoveContainer" containerID="5b8f541e435e7f91194886739d3729d18fc238d1ca389a29206c0a1320c44992" Feb 18 11:53:16 crc kubenswrapper[4717]: E0218 11:53:16.299900 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b8f541e435e7f91194886739d3729d18fc238d1ca389a29206c0a1320c44992\": container with ID starting with 5b8f541e435e7f91194886739d3729d18fc238d1ca389a29206c0a1320c44992 not found: ID does not exist" containerID="5b8f541e435e7f91194886739d3729d18fc238d1ca389a29206c0a1320c44992" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.299937 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b8f541e435e7f91194886739d3729d18fc238d1ca389a29206c0a1320c44992"} err="failed to get container status \"5b8f541e435e7f91194886739d3729d18fc238d1ca389a29206c0a1320c44992\": rpc error: code = NotFound desc = could not find container \"5b8f541e435e7f91194886739d3729d18fc238d1ca389a29206c0a1320c44992\": container with ID starting with 5b8f541e435e7f91194886739d3729d18fc238d1ca389a29206c0a1320c44992 not found: ID does not exist" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.301500 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-68bd8b45db-2k46r"] Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.352436 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm"] Feb 18 11:53:16 crc kubenswrapper[4717]: E0218 11:53:16.353615 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df54bb8-8456-4c72-8cd4-49abc687ba4d" containerName="extract-utilities" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 
11:53:16.353644 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df54bb8-8456-4c72-8cd4-49abc687ba4d" containerName="extract-utilities" Feb 18 11:53:16 crc kubenswrapper[4717]: E0218 11:53:16.353654 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b07ffbf-aa95-48ef-baa5-68fb036483b7" containerName="extract-content" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.353662 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b07ffbf-aa95-48ef-baa5-68fb036483b7" containerName="extract-content" Feb 18 11:53:16 crc kubenswrapper[4717]: E0218 11:53:16.353670 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b07ffbf-aa95-48ef-baa5-68fb036483b7" containerName="extract-utilities" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.353677 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b07ffbf-aa95-48ef-baa5-68fb036483b7" containerName="extract-utilities" Feb 18 11:53:16 crc kubenswrapper[4717]: E0218 11:53:16.353727 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739737ee-803a-478a-a7f2-de797ffeca2a" containerName="extract-content" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.353738 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="739737ee-803a-478a-a7f2-de797ffeca2a" containerName="extract-content" Feb 18 11:53:16 crc kubenswrapper[4717]: E0218 11:53:16.353748 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df54bb8-8456-4c72-8cd4-49abc687ba4d" containerName="registry-server" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.353753 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df54bb8-8456-4c72-8cd4-49abc687ba4d" containerName="registry-server" Feb 18 11:53:16 crc kubenswrapper[4717]: E0218 11:53:16.353761 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0584ba33-ee0a-4969-9a97-55d9c4086601" containerName="route-controller-manager" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 
11:53:16.353767 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="0584ba33-ee0a-4969-9a97-55d9c4086601" containerName="route-controller-manager" Feb 18 11:53:16 crc kubenswrapper[4717]: E0218 11:53:16.353773 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739737ee-803a-478a-a7f2-de797ffeca2a" containerName="registry-server" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.353781 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="739737ee-803a-478a-a7f2-de797ffeca2a" containerName="registry-server" Feb 18 11:53:16 crc kubenswrapper[4717]: E0218 11:53:16.353792 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b07ffbf-aa95-48ef-baa5-68fb036483b7" containerName="registry-server" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.353798 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b07ffbf-aa95-48ef-baa5-68fb036483b7" containerName="registry-server" Feb 18 11:53:16 crc kubenswrapper[4717]: E0218 11:53:16.353807 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df54bb8-8456-4c72-8cd4-49abc687ba4d" containerName="extract-content" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.353813 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df54bb8-8456-4c72-8cd4-49abc687ba4d" containerName="extract-content" Feb 18 11:53:16 crc kubenswrapper[4717]: E0218 11:53:16.353820 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ebf329c-e2e9-41ca-aecc-540b29707768" containerName="controller-manager" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.353826 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ebf329c-e2e9-41ca-aecc-540b29707768" containerName="controller-manager" Feb 18 11:53:16 crc kubenswrapper[4717]: E0218 11:53:16.353835 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739737ee-803a-478a-a7f2-de797ffeca2a" containerName="extract-utilities" Feb 18 11:53:16 crc kubenswrapper[4717]: 
I0218 11:53:16.353841 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="739737ee-803a-478a-a7f2-de797ffeca2a" containerName="extract-utilities" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.354022 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ebf329c-e2e9-41ca-aecc-540b29707768" containerName="controller-manager" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.354037 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="1df54bb8-8456-4c72-8cd4-49abc687ba4d" containerName="registry-server" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.354044 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b07ffbf-aa95-48ef-baa5-68fb036483b7" containerName="registry-server" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.354054 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="739737ee-803a-478a-a7f2-de797ffeca2a" containerName="registry-server" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.354062 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="0584ba33-ee0a-4969-9a97-55d9c4086601" containerName="route-controller-manager" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.354929 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls"] Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.355095 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.360618 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.361175 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.363406 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.363563 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.363662 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.363839 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.364219 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.364512 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.364724 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.364770 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.365111 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 11:53:16 crc kubenswrapper[4717]: 
I0218 11:53:16.365941 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm"] Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.367383 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.367571 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.369541 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.369779 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls"] Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.381848 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldvrj\" (UniqueName: \"kubernetes.io/projected/5c58a5c5-6924-4f78-b8f7-19134258eea5-kube-api-access-ldvrj\") pod \"controller-manager-68fc7ffdbf-tktdm\" (UID: \"5c58a5c5-6924-4f78-b8f7-19134258eea5\") " pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.381901 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c58a5c5-6924-4f78-b8f7-19134258eea5-config\") pod \"controller-manager-68fc7ffdbf-tktdm\" (UID: \"5c58a5c5-6924-4f78-b8f7-19134258eea5\") " pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.381925 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/5c58a5c5-6924-4f78-b8f7-19134258eea5-serving-cert\") pod \"controller-manager-68fc7ffdbf-tktdm\" (UID: \"5c58a5c5-6924-4f78-b8f7-19134258eea5\") " pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.381948 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c58a5c5-6924-4f78-b8f7-19134258eea5-client-ca\") pod \"controller-manager-68fc7ffdbf-tktdm\" (UID: \"5c58a5c5-6924-4f78-b8f7-19134258eea5\") " pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.381989 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c58a5c5-6924-4f78-b8f7-19134258eea5-proxy-ca-bundles\") pod \"controller-manager-68fc7ffdbf-tktdm\" (UID: \"5c58a5c5-6924-4f78-b8f7-19134258eea5\") " pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.482978 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldvrj\" (UniqueName: \"kubernetes.io/projected/5c58a5c5-6924-4f78-b8f7-19134258eea5-kube-api-access-ldvrj\") pod \"controller-manager-68fc7ffdbf-tktdm\" (UID: \"5c58a5c5-6924-4f78-b8f7-19134258eea5\") " pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.483073 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a208743-fc9b-4be2-b4e2-114c29d7e158-config\") pod \"route-controller-manager-5949788c79-k4xls\" (UID: \"7a208743-fc9b-4be2-b4e2-114c29d7e158\") " 
pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.483110 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c58a5c5-6924-4f78-b8f7-19134258eea5-config\") pod \"controller-manager-68fc7ffdbf-tktdm\" (UID: \"5c58a5c5-6924-4f78-b8f7-19134258eea5\") " pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.483137 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8zgl\" (UniqueName: \"kubernetes.io/projected/7a208743-fc9b-4be2-b4e2-114c29d7e158-kube-api-access-b8zgl\") pod \"route-controller-manager-5949788c79-k4xls\" (UID: \"7a208743-fc9b-4be2-b4e2-114c29d7e158\") " pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.483163 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c58a5c5-6924-4f78-b8f7-19134258eea5-serving-cert\") pod \"controller-manager-68fc7ffdbf-tktdm\" (UID: \"5c58a5c5-6924-4f78-b8f7-19134258eea5\") " pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.483184 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a208743-fc9b-4be2-b4e2-114c29d7e158-client-ca\") pod \"route-controller-manager-5949788c79-k4xls\" (UID: \"7a208743-fc9b-4be2-b4e2-114c29d7e158\") " pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.483208 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5c58a5c5-6924-4f78-b8f7-19134258eea5-client-ca\") pod \"controller-manager-68fc7ffdbf-tktdm\" (UID: \"5c58a5c5-6924-4f78-b8f7-19134258eea5\") " pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.483234 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a208743-fc9b-4be2-b4e2-114c29d7e158-serving-cert\") pod \"route-controller-manager-5949788c79-k4xls\" (UID: \"7a208743-fc9b-4be2-b4e2-114c29d7e158\") " pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.483298 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c58a5c5-6924-4f78-b8f7-19134258eea5-proxy-ca-bundles\") pod \"controller-manager-68fc7ffdbf-tktdm\" (UID: \"5c58a5c5-6924-4f78-b8f7-19134258eea5\") " pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.484286 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c58a5c5-6924-4f78-b8f7-19134258eea5-client-ca\") pod \"controller-manager-68fc7ffdbf-tktdm\" (UID: \"5c58a5c5-6924-4f78-b8f7-19134258eea5\") " pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.484337 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c58a5c5-6924-4f78-b8f7-19134258eea5-proxy-ca-bundles\") pod \"controller-manager-68fc7ffdbf-tktdm\" (UID: \"5c58a5c5-6924-4f78-b8f7-19134258eea5\") " pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 
11:53:16.484920 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c58a5c5-6924-4f78-b8f7-19134258eea5-config\") pod \"controller-manager-68fc7ffdbf-tktdm\" (UID: \"5c58a5c5-6924-4f78-b8f7-19134258eea5\") " pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.487153 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c58a5c5-6924-4f78-b8f7-19134258eea5-serving-cert\") pod \"controller-manager-68fc7ffdbf-tktdm\" (UID: \"5c58a5c5-6924-4f78-b8f7-19134258eea5\") " pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.498162 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldvrj\" (UniqueName: \"kubernetes.io/projected/5c58a5c5-6924-4f78-b8f7-19134258eea5-kube-api-access-ldvrj\") pod \"controller-manager-68fc7ffdbf-tktdm\" (UID: \"5c58a5c5-6924-4f78-b8f7-19134258eea5\") " pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.584237 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a208743-fc9b-4be2-b4e2-114c29d7e158-serving-cert\") pod \"route-controller-manager-5949788c79-k4xls\" (UID: \"7a208743-fc9b-4be2-b4e2-114c29d7e158\") " pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.584367 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a208743-fc9b-4be2-b4e2-114c29d7e158-config\") pod \"route-controller-manager-5949788c79-k4xls\" (UID: \"7a208743-fc9b-4be2-b4e2-114c29d7e158\") " 
pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.584396 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8zgl\" (UniqueName: \"kubernetes.io/projected/7a208743-fc9b-4be2-b4e2-114c29d7e158-kube-api-access-b8zgl\") pod \"route-controller-manager-5949788c79-k4xls\" (UID: \"7a208743-fc9b-4be2-b4e2-114c29d7e158\") " pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.584422 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a208743-fc9b-4be2-b4e2-114c29d7e158-client-ca\") pod \"route-controller-manager-5949788c79-k4xls\" (UID: \"7a208743-fc9b-4be2-b4e2-114c29d7e158\") " pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.585188 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a208743-fc9b-4be2-b4e2-114c29d7e158-client-ca\") pod \"route-controller-manager-5949788c79-k4xls\" (UID: \"7a208743-fc9b-4be2-b4e2-114c29d7e158\") " pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.585459 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a208743-fc9b-4be2-b4e2-114c29d7e158-config\") pod \"route-controller-manager-5949788c79-k4xls\" (UID: \"7a208743-fc9b-4be2-b4e2-114c29d7e158\") " pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.587993 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7a208743-fc9b-4be2-b4e2-114c29d7e158-serving-cert\") pod \"route-controller-manager-5949788c79-k4xls\" (UID: \"7a208743-fc9b-4be2-b4e2-114c29d7e158\") " pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.599645 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8zgl\" (UniqueName: \"kubernetes.io/projected/7a208743-fc9b-4be2-b4e2-114c29d7e158-kube-api-access-b8zgl\") pod \"route-controller-manager-5949788c79-k4xls\" (UID: \"7a208743-fc9b-4be2-b4e2-114c29d7e158\") " pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.680113 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:16 crc kubenswrapper[4717]: I0218 11:53:16.687335 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" Feb 18 11:53:17 crc kubenswrapper[4717]: I0218 11:53:17.045805 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0584ba33-ee0a-4969-9a97-55d9c4086601" path="/var/lib/kubelet/pods/0584ba33-ee0a-4969-9a97-55d9c4086601/volumes" Feb 18 11:53:17 crc kubenswrapper[4717]: I0218 11:53:17.046655 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebf329c-e2e9-41ca-aecc-540b29707768" path="/var/lib/kubelet/pods/5ebf329c-e2e9-41ca-aecc-540b29707768/volumes" Feb 18 11:53:17 crc kubenswrapper[4717]: I0218 11:53:17.060854 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm"] Feb 18 11:53:17 crc kubenswrapper[4717]: W0218 11:53:17.064837 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c58a5c5_6924_4f78_b8f7_19134258eea5.slice/crio-dc051a8accf2fbce01d3791b874903cae4b2255a87e61531658c0b9d0429ed23 WatchSource:0}: Error finding container dc051a8accf2fbce01d3791b874903cae4b2255a87e61531658c0b9d0429ed23: Status 404 returned error can't find the container with id dc051a8accf2fbce01d3791b874903cae4b2255a87e61531658c0b9d0429ed23 Feb 18 11:53:17 crc kubenswrapper[4717]: I0218 11:53:17.105731 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls"] Feb 18 11:53:17 crc kubenswrapper[4717]: W0218 11:53:17.115175 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a208743_fc9b_4be2_b4e2_114c29d7e158.slice/crio-dd084e56efae0e00a4c385a66c606764c9c55c74c8071ee29bd45e55a99f197b WatchSource:0}: Error finding container dd084e56efae0e00a4c385a66c606764c9c55c74c8071ee29bd45e55a99f197b: Status 404 returned error can't find the 
container with id dd084e56efae0e00a4c385a66c606764c9c55c74c8071ee29bd45e55a99f197b Feb 18 11:53:17 crc kubenswrapper[4717]: I0218 11:53:17.252556 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" event={"ID":"5c58a5c5-6924-4f78-b8f7-19134258eea5","Type":"ContainerStarted","Data":"acf8527354e3a7587e6ad83bce6c9addb7b0b31f7000a592d35dc8a6ba134817"} Feb 18 11:53:17 crc kubenswrapper[4717]: I0218 11:53:17.252615 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" event={"ID":"5c58a5c5-6924-4f78-b8f7-19134258eea5","Type":"ContainerStarted","Data":"dc051a8accf2fbce01d3791b874903cae4b2255a87e61531658c0b9d0429ed23"} Feb 18 11:53:17 crc kubenswrapper[4717]: I0218 11:53:17.252634 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:17 crc kubenswrapper[4717]: I0218 11:53:17.255733 4717 patch_prober.go:28] interesting pod/controller-manager-68fc7ffdbf-tktdm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Feb 18 11:53:17 crc kubenswrapper[4717]: I0218 11:53:17.255777 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" podUID="5c58a5c5-6924-4f78-b8f7-19134258eea5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Feb 18 11:53:17 crc kubenswrapper[4717]: I0218 11:53:17.257948 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" 
event={"ID":"7a208743-fc9b-4be2-b4e2-114c29d7e158","Type":"ContainerStarted","Data":"53d00c88b7c0146fab0bd8c868636d6344ed29400c51fddb8291fb93e449b02b"} Feb 18 11:53:17 crc kubenswrapper[4717]: I0218 11:53:17.257983 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" event={"ID":"7a208743-fc9b-4be2-b4e2-114c29d7e158","Type":"ContainerStarted","Data":"dd084e56efae0e00a4c385a66c606764c9c55c74c8071ee29bd45e55a99f197b"} Feb 18 11:53:17 crc kubenswrapper[4717]: I0218 11:53:17.258470 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" Feb 18 11:53:17 crc kubenswrapper[4717]: I0218 11:53:17.260077 4717 patch_prober.go:28] interesting pod/route-controller-manager-5949788c79-k4xls container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Feb 18 11:53:17 crc kubenswrapper[4717]: I0218 11:53:17.260112 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" podUID="7a208743-fc9b-4be2-b4e2-114c29d7e158" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Feb 18 11:53:17 crc kubenswrapper[4717]: I0218 11:53:17.271710 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" podStartSLOduration=2.271695231 podStartE2EDuration="2.271695231s" podCreationTimestamp="2026-02-18 11:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:53:17.268625356 
+0000 UTC m=+231.670726912" watchObservedRunningTime="2026-02-18 11:53:17.271695231 +0000 UTC m=+231.673796547" Feb 18 11:53:17 crc kubenswrapper[4717]: I0218 11:53:17.288476 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" podStartSLOduration=2.2884579179999998 podStartE2EDuration="2.288457918s" podCreationTimestamp="2026-02-18 11:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:53:17.287497939 +0000 UTC m=+231.689599255" watchObservedRunningTime="2026-02-18 11:53:17.288457918 +0000 UTC m=+231.690559234" Feb 18 11:53:18 crc kubenswrapper[4717]: I0218 11:53:18.269213 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5949788c79-k4xls" Feb 18 11:53:18 crc kubenswrapper[4717]: I0218 11:53:18.270313 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68fc7ffdbf-tktdm" Feb 18 11:53:19 crc kubenswrapper[4717]: I0218 11:53:19.125237 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6bvgx" Feb 18 11:53:19 crc kubenswrapper[4717]: I0218 11:53:19.164749 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6bvgx" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.263485 4717 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.264368 4717 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.264646 4717 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430" gracePeriod=15 Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.264784 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.264868 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65" gracePeriod=15 Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.264859 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa" gracePeriod=15 Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.264964 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b" gracePeriod=15 Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.265030 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5" gracePeriod=15 Feb 18 11:53:20 crc 
kubenswrapper[4717]: I0218 11:53:20.267435 4717 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 11:53:20 crc kubenswrapper[4717]: E0218 11:53:20.267678 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.267699 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 11:53:20 crc kubenswrapper[4717]: E0218 11:53:20.267751 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.267758 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 11:53:20 crc kubenswrapper[4717]: E0218 11:53:20.267768 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.267774 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 11:53:20 crc kubenswrapper[4717]: E0218 11:53:20.267795 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.267800 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 11:53:20 crc kubenswrapper[4717]: E0218 11:53:20.267809 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.267815 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 11:53:20 crc kubenswrapper[4717]: E0218 11:53:20.267823 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.267829 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.267927 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.267937 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.267944 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.267957 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.267964 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.267971 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 11:53:20 crc kubenswrapper[4717]: E0218 11:53:20.268060 4717 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.268067 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.294167 4717 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]log ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]api-openshift-apiserver-available ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]api-openshift-oauth-apiserver-available ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]informer-sync ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/generic-apiserver-start-informers ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/priority-and-fairness-filter ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/start-apiextensions-informers ok Feb 18 11:53:20 
crc kubenswrapper[4717]: [+]poststarthook/start-apiextensions-controllers ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/crd-informer-synced ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/start-system-namespaces-controller ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/rbac/bootstrap-roles ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/bootstrap-controller ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/start-kube-aggregator-informers ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/apiservice-registration-controller ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/apiservice-discovery-controller ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]autoregister-completion ok 
Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/apiservice-openapi-controller ok Feb 18 11:53:20 crc kubenswrapper[4717]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 18 11:53:20 crc kubenswrapper[4717]: [-]shutdown failed: reason withheld Feb 18 11:53:20 crc kubenswrapper[4717]: readyz check failed Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.294227 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.370598 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.370662 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.370686 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.370717 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.370739 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.370779 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.370806 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.370838 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.471494 
4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.471540 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.471563 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.471586 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.471607 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.471681 4717 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.471706 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.471721 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.471778 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.471814 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.471834 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.471855 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.471875 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.471901 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.471900 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.471922 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.671578 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.673352 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.674332 4717 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa" exitCode=0 Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.674355 4717 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b" exitCode=0 Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.674362 4717 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65" exitCode=0 Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.674370 4717 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5" exitCode=2 Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.674420 4717 scope.go:117] "RemoveContainer" containerID="af4c7317f75607ac50e734e1eec43f1e9e09532ce272bed02d9fb3c92e46fea4" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.676420 4717 generic.go:334] "Generic (PLEG): container finished" podID="11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d" 
containerID="64584b732ebc084ec0122736343ed8b212307c5c93ff49daa40db5862540e36e" exitCode=0 Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.676487 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d","Type":"ContainerDied","Data":"64584b732ebc084ec0122736343ed8b212307c5c93ff49daa40db5862540e36e"} Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.677224 4717 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:20 crc kubenswrapper[4717]: I0218 11:53:20.677496 4717 status_manager.go:851] "Failed to get status for pod" podUID="11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:21 crc kubenswrapper[4717]: I0218 11:53:21.684641 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.005485 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.006505 4717 status_manager.go:851] "Failed to get status for pod" podUID="11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.192349 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-var-lock\") pod \"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d\" (UID: \"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d\") " Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.192411 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-kube-api-access\") pod \"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d\" (UID: \"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d\") " Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.192449 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-kubelet-dir\") pod \"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d\" (UID: \"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d\") " Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.192501 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-var-lock" (OuterVolumeSpecName: "var-lock") pod "11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d" (UID: "11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.192665 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d" (UID: "11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.194475 4717 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.194511 4717 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.213742 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d" (UID: "11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.296117 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.631997 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.633174 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.633816 4717 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.634140 4717 status_manager.go:851] "Failed to get status for pod" podUID="11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.694559 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.696073 4717 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430" exitCode=0 Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.696153 4717 scope.go:117] "RemoveContainer" containerID="be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.696194 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.698492 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d","Type":"ContainerDied","Data":"eb5de7f26e7ad32ec09f350c027db321950850ce07a3aad606392be30d48d3ea"} Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.698619 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb5de7f26e7ad32ec09f350c027db321950850ce07a3aad606392be30d48d3ea" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.698549 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.703337 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.703401 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.703534 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.703629 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.703853 4717 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.703878 4717 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.710575 4717 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.710870 4717 status_manager.go:851] "Failed to get status for pod" podUID="11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.711137 4717 scope.go:117] "RemoveContainer" containerID="7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.724076 4717 scope.go:117] "RemoveContainer" containerID="f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.735652 4717 scope.go:117] "RemoveContainer" containerID="979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.749454 4717 scope.go:117] "RemoveContainer" containerID="38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.762284 4717 scope.go:117] "RemoveContainer" containerID="c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.778893 4717 scope.go:117] "RemoveContainer" containerID="be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa" Feb 18 11:53:22 crc kubenswrapper[4717]: E0218 11:53:22.779236 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\": container with ID starting with be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa not found: ID does not exist" containerID="be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.779373 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa"} err="failed to get container status 
\"be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\": rpc error: code = NotFound desc = could not find container \"be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa\": container with ID starting with be15e043409c4c2a581c30463a4fa64c1e5cf98601e349b1f072b38a42e214fa not found: ID does not exist" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.779484 4717 scope.go:117] "RemoveContainer" containerID="7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b" Feb 18 11:53:22 crc kubenswrapper[4717]: E0218 11:53:22.779889 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\": container with ID starting with 7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b not found: ID does not exist" containerID="7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.779996 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b"} err="failed to get container status \"7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\": rpc error: code = NotFound desc = could not find container \"7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b\": container with ID starting with 7f588b8b8dce1977da766f9dcf58a87499a37b78b7d3d5fd1b6816ad08191b5b not found: ID does not exist" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.780095 4717 scope.go:117] "RemoveContainer" containerID="f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65" Feb 18 11:53:22 crc kubenswrapper[4717]: E0218 11:53:22.780430 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\": container with ID starting with f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65 not found: ID does not exist" containerID="f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.780527 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65"} err="failed to get container status \"f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\": rpc error: code = NotFound desc = could not find container \"f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65\": container with ID starting with f225fba283aff8ea5881b3d77bc6883088e03cf27f9f3ce8bb22984b642bfc65 not found: ID does not exist" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.780624 4717 scope.go:117] "RemoveContainer" containerID="979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5" Feb 18 11:53:22 crc kubenswrapper[4717]: E0218 11:53:22.780930 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\": container with ID starting with 979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5 not found: ID does not exist" containerID="979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.781031 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5"} err="failed to get container status \"979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\": rpc error: code = NotFound desc = could not find container \"979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5\": container with ID 
starting with 979244a498ea9021e4d58f037fcef73490b7482b6e951056bfea5d947fd242d5 not found: ID does not exist" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.781125 4717 scope.go:117] "RemoveContainer" containerID="38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430" Feb 18 11:53:22 crc kubenswrapper[4717]: E0218 11:53:22.781629 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\": container with ID starting with 38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430 not found: ID does not exist" containerID="38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.781653 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430"} err="failed to get container status \"38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\": rpc error: code = NotFound desc = could not find container \"38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430\": container with ID starting with 38abfabff205c1b81bb1dfad9f2a05e7b643bf1253a3d250bca97b646525b430 not found: ID does not exist" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.781690 4717 scope.go:117] "RemoveContainer" containerID="c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7" Feb 18 11:53:22 crc kubenswrapper[4717]: E0218 11:53:22.781967 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\": container with ID starting with c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7 not found: ID does not exist" containerID="c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7" Feb 18 
11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.782074 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7"} err="failed to get container status \"c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\": rpc error: code = NotFound desc = could not find container \"c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7\": container with ID starting with c101316a01183b27aa0f81b99407a16190be97e82c476523c19b178b64c252b7 not found: ID does not exist" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.804697 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.805173 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:53:22 crc kubenswrapper[4717]: I0218 11:53:22.906731 4717 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:53:23 crc kubenswrapper[4717]: I0218 11:53:23.012311 4717 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:23 crc kubenswrapper[4717]: I0218 11:53:23.012726 4717 status_manager.go:851] "Failed to get status for pod" podUID="11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:23 crc kubenswrapper[4717]: I0218 11:53:23.044202 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 18 11:53:25 crc kubenswrapper[4717]: E0218 11:53:25.295280 4717 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:25 crc kubenswrapper[4717]: I0218 11:53:25.296082 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:25 crc kubenswrapper[4717]: W0218 11:53:25.321710 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-eb26ff5393ba58754b0ddb6ed7a372c64c075b9445194e1ae83ba1eaf7e12a12 WatchSource:0}: Error finding container eb26ff5393ba58754b0ddb6ed7a372c64c075b9445194e1ae83ba1eaf7e12a12: Status 404 returned error can't find the container with id eb26ff5393ba58754b0ddb6ed7a372c64c075b9445194e1ae83ba1eaf7e12a12 Feb 18 11:53:25 crc kubenswrapper[4717]: E0218 11:53:25.324545 4717 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895551b90ebb36b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 11:53:25.324002155 +0000 UTC m=+239.726103471,LastTimestamp:2026-02-18 11:53:25.324002155 +0000 UTC m=+239.726103471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 11:53:25 crc kubenswrapper[4717]: I0218 11:53:25.718192 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6360d306f44b2cf6f6af621116f4d33501007b3d6134eba404959d891ee5ae6b"} Feb 18 11:53:25 crc kubenswrapper[4717]: I0218 11:53:25.718566 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"eb26ff5393ba58754b0ddb6ed7a372c64c075b9445194e1ae83ba1eaf7e12a12"} Feb 18 11:53:25 crc kubenswrapper[4717]: E0218 11:53:25.719126 4717 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:53:25 crc kubenswrapper[4717]: I0218 11:53:25.719215 4717 status_manager.go:851] "Failed to get status for pod" podUID="11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:27 crc kubenswrapper[4717]: I0218 11:53:27.039574 4717 status_manager.go:851] "Failed to get status for pod" podUID="11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:27 crc kubenswrapper[4717]: E0218 11:53:27.794044 4717 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:27 crc kubenswrapper[4717]: E0218 
11:53:27.794600 4717 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:27 crc kubenswrapper[4717]: E0218 11:53:27.794883 4717 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:27 crc kubenswrapper[4717]: E0218 11:53:27.795101 4717 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:27 crc kubenswrapper[4717]: E0218 11:53:27.795289 4717 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:27 crc kubenswrapper[4717]: I0218 11:53:27.795312 4717 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 18 11:53:27 crc kubenswrapper[4717]: E0218 11:53:27.795613 4717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="200ms" Feb 18 11:53:27 crc kubenswrapper[4717]: E0218 11:53:27.997121 4717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="400ms" Feb 18 
11:53:28 crc kubenswrapper[4717]: E0218 11:53:28.344250 4717 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895551b90ebb36b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 11:53:25.324002155 +0000 UTC m=+239.726103471,LastTimestamp:2026-02-18 11:53:25.324002155 +0000 UTC m=+239.726103471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 11:53:28 crc kubenswrapper[4717]: E0218 11:53:28.397836 4717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="800ms" Feb 18 11:53:29 crc kubenswrapper[4717]: E0218 11:53:29.198659 4717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="1.6s" Feb 18 11:53:30 crc kubenswrapper[4717]: E0218 11:53:30.799737 4717 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="3.2s" Feb 18 11:53:33 crc kubenswrapper[4717]: I0218 11:53:33.036643 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:33 crc kubenswrapper[4717]: I0218 11:53:33.037651 4717 status_manager.go:851] "Failed to get status for pod" podUID="11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:33 crc kubenswrapper[4717]: I0218 11:53:33.052819 4717 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10bd33aa-1784-4c9c-aaae-2c0df3304785" Feb 18 11:53:33 crc kubenswrapper[4717]: I0218 11:53:33.052855 4717 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10bd33aa-1784-4c9c-aaae-2c0df3304785" Feb 18 11:53:33 crc kubenswrapper[4717]: E0218 11:53:33.053355 4717 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:33 crc kubenswrapper[4717]: I0218 11:53:33.053926 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:33 crc kubenswrapper[4717]: I0218 11:53:33.764198 4717 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="6775c38f62fce5042e26f3f7ab191656b40711cb62f76365385e484f893e2cd0" exitCode=0 Feb 18 11:53:33 crc kubenswrapper[4717]: I0218 11:53:33.764290 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"6775c38f62fce5042e26f3f7ab191656b40711cb62f76365385e484f893e2cd0"} Feb 18 11:53:33 crc kubenswrapper[4717]: I0218 11:53:33.764524 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d89af547545a3ff2b8059bf8e5f210845f750c8ce445d1f0858a483bc751d9f4"} Feb 18 11:53:33 crc kubenswrapper[4717]: I0218 11:53:33.764793 4717 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10bd33aa-1784-4c9c-aaae-2c0df3304785" Feb 18 11:53:33 crc kubenswrapper[4717]: I0218 11:53:33.764804 4717 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10bd33aa-1784-4c9c-aaae-2c0df3304785" Feb 18 11:53:33 crc kubenswrapper[4717]: E0218 11:53:33.765433 4717 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:33 crc kubenswrapper[4717]: I0218 11:53:33.765509 4717 status_manager.go:851] "Failed to get status for pod" podUID="11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Feb 18 11:53:34 crc kubenswrapper[4717]: E0218 11:53:34.001468 4717 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="6.4s" Feb 18 11:53:34 crc kubenswrapper[4717]: I0218 11:53:34.773004 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 11:53:34 crc kubenswrapper[4717]: I0218 11:53:34.773059 4717 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d" exitCode=1 Feb 18 11:53:34 crc kubenswrapper[4717]: I0218 11:53:34.773119 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d"} Feb 18 11:53:34 crc kubenswrapper[4717]: I0218 11:53:34.773600 4717 scope.go:117] "RemoveContainer" containerID="0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d" Feb 18 11:53:34 crc kubenswrapper[4717]: I0218 11:53:34.777434 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"32ec80aa21d1285b6a718fb2c2c412588cbaa9e0f60f2b75fa47334b90385876"} Feb 18 11:53:34 crc kubenswrapper[4717]: I0218 11:53:34.777474 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"594b8087a0e5efc5c02f663359b5db5e1cc1cb13a4c22613e5c75cd09efe79fd"} Feb 18 11:53:34 crc kubenswrapper[4717]: I0218 11:53:34.777488 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4be7571123741755f70a445815a9750b8b1541a6f0b002b3da24923bcbcda9f8"} Feb 18 11:53:34 crc kubenswrapper[4717]: I0218 11:53:34.777502 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2fab58f6deb10388412cb7c64e97f1d91c3d67514ee5d2d72fd24be6fd7140b5"} Feb 18 11:53:35 crc kubenswrapper[4717]: I0218 11:53:35.786014 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"69dc18c5685c413d468467db9a160bc7db590796f4de1cacff4a21ef726b0b0c"} Feb 18 11:53:35 crc kubenswrapper[4717]: I0218 11:53:35.786384 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:35 crc kubenswrapper[4717]: I0218 11:53:35.786318 4717 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10bd33aa-1784-4c9c-aaae-2c0df3304785" Feb 18 11:53:35 crc kubenswrapper[4717]: I0218 11:53:35.786403 4717 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10bd33aa-1784-4c9c-aaae-2c0df3304785" Feb 18 11:53:35 crc kubenswrapper[4717]: I0218 11:53:35.789022 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 11:53:35 crc kubenswrapper[4717]: I0218 
11:53:35.789065 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"90096d237b25255e4778d3aa592ff2d80dd1217c56e8ec8fc64fef5b74953518"} Feb 18 11:53:38 crc kubenswrapper[4717]: I0218 11:53:38.054310 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:38 crc kubenswrapper[4717]: I0218 11:53:38.054656 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:38 crc kubenswrapper[4717]: I0218 11:53:38.059556 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:38 crc kubenswrapper[4717]: I0218 11:53:38.910074 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:53:38 crc kubenswrapper[4717]: I0218 11:53:38.910196 4717 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 18 11:53:38 crc kubenswrapper[4717]: I0218 11:53:38.910237 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 18 11:53:40 crc kubenswrapper[4717]: I0218 11:53:40.794423 4717 kubelet.go:1914] "Deleted mirror pod because it is outdated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:40 crc kubenswrapper[4717]: I0218 11:53:40.815535 4717 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10bd33aa-1784-4c9c-aaae-2c0df3304785" Feb 18 11:53:40 crc kubenswrapper[4717]: I0218 11:53:40.815579 4717 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10bd33aa-1784-4c9c-aaae-2c0df3304785" Feb 18 11:53:40 crc kubenswrapper[4717]: I0218 11:53:40.819106 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:53:40 crc kubenswrapper[4717]: I0218 11:53:40.821732 4717 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a32ab9ef-e671-498f-8b6d-002fb6a06be2" Feb 18 11:53:41 crc kubenswrapper[4717]: I0218 11:53:41.819834 4717 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10bd33aa-1784-4c9c-aaae-2c0df3304785" Feb 18 11:53:41 crc kubenswrapper[4717]: I0218 11:53:41.819863 4717 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10bd33aa-1784-4c9c-aaae-2c0df3304785" Feb 18 11:53:43 crc kubenswrapper[4717]: I0218 11:53:43.543543 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:53:47 crc kubenswrapper[4717]: I0218 11:53:47.054673 4717 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a32ab9ef-e671-498f-8b6d-002fb6a06be2" Feb 18 11:53:48 crc kubenswrapper[4717]: I0218 11:53:48.910051 4717 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 18 11:53:48 crc kubenswrapper[4717]: I0218 11:53:48.910129 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 18 11:53:50 crc kubenswrapper[4717]: I0218 11:53:50.133105 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 11:53:51 crc kubenswrapper[4717]: I0218 11:53:51.105336 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 11:53:51 crc kubenswrapper[4717]: I0218 11:53:51.841642 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 11:53:51 crc kubenswrapper[4717]: I0218 11:53:51.988714 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 11:53:52 crc kubenswrapper[4717]: I0218 11:53:52.037827 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 11:53:52 crc kubenswrapper[4717]: I0218 11:53:52.144804 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 11:53:52 crc kubenswrapper[4717]: I0218 11:53:52.172402 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 11:53:52 crc kubenswrapper[4717]: I0218 11:53:52.458230 4717 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 11:53:52 crc kubenswrapper[4717]: I0218 11:53:52.512992 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 11:53:52 crc kubenswrapper[4717]: I0218 11:53:52.627130 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 11:53:52 crc kubenswrapper[4717]: I0218 11:53:52.775750 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 11:53:52 crc kubenswrapper[4717]: I0218 11:53:52.815527 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 11:53:52 crc kubenswrapper[4717]: I0218 11:53:52.820776 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 11:53:52 crc kubenswrapper[4717]: I0218 11:53:52.892388 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 11:53:53 crc kubenswrapper[4717]: I0218 11:53:53.013623 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 11:53:53 crc kubenswrapper[4717]: I0218 11:53:53.021630 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 11:53:53 crc kubenswrapper[4717]: I0218 11:53:53.174033 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 11:53:53 crc kubenswrapper[4717]: I0218 11:53:53.296462 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 11:53:53 crc 
kubenswrapper[4717]: I0218 11:53:53.666414 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 11:53:53 crc kubenswrapper[4717]: I0218 11:53:53.675926 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 11:53:53 crc kubenswrapper[4717]: I0218 11:53:53.805281 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 11:53:53 crc kubenswrapper[4717]: I0218 11:53:53.847049 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 11:53:53 crc kubenswrapper[4717]: I0218 11:53:53.942912 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 11:53:54 crc kubenswrapper[4717]: I0218 11:53:54.053852 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 11:53:54 crc kubenswrapper[4717]: I0218 11:53:54.068669 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 11:53:54 crc kubenswrapper[4717]: I0218 11:53:54.208717 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 11:53:54 crc kubenswrapper[4717]: I0218 11:53:54.214011 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 11:53:54 crc kubenswrapper[4717]: I0218 11:53:54.222635 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 11:53:54 crc kubenswrapper[4717]: I0218 11:53:54.280891 4717 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Feb 18 11:53:54 crc kubenswrapper[4717]: I0218 11:53:54.288382 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 11:53:54 crc kubenswrapper[4717]: I0218 11:53:54.518220 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 11:53:54 crc kubenswrapper[4717]: I0218 11:53:54.806995 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 11:53:54 crc kubenswrapper[4717]: I0218 11:53:54.830295 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 11:53:54 crc kubenswrapper[4717]: I0218 11:53:54.910045 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 11:53:54 crc kubenswrapper[4717]: I0218 11:53:54.935301 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 11:53:54 crc kubenswrapper[4717]: I0218 11:53:54.963666 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.009352 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.117897 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.181934 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.196650 4717 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.221302 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.296728 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.299701 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.328130 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.538304 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.562805 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.633142 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.721998 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.776651 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.797009 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" 
Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.816796 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.938941 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.953226 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.973111 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 11:53:55 crc kubenswrapper[4717]: I0218 11:53:55.979323 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 11:53:56.015134 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 11:53:56.108559 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 11:53:56.215958 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 11:53:56.218925 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 11:53:56.279905 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 11:53:56.316695 4717 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 11:53:56.345984 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 11:53:56.456571 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 11:53:56.476154 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 11:53:56.534486 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 11:53:56.582843 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 11:53:56.618034 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 11:53:56.644887 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 11:53:56.721705 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 11:53:56.725303 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 11:53:56.792415 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 11:53:56 crc kubenswrapper[4717]: I0218 
11:53:56.911609 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.001463 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.088750 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.163586 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.169370 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.312449 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.368376 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.464482 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.595201 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.615601 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.632390 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 11:53:57 crc 
kubenswrapper[4717]: I0218 11:53:57.654978 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.721409 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.753429 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.809168 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.849986 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.896385 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.898062 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 11:53:57 crc kubenswrapper[4717]: I0218 11:53:57.941758 4717 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.002401 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.040748 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.061703 4717 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.143658 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.189343 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.254227 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.261743 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.348596 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.410851 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.473666 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.476375 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.508891 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.546083 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 
11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.869979 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.899986 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.910124 4717 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.910414 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.910610 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.913418 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"90096d237b25255e4778d3aa592ff2d80dd1217c56e8ec8fc64fef5b74953518"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.913886 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://90096d237b25255e4778d3aa592ff2d80dd1217c56e8ec8fc64fef5b74953518" gracePeriod=30 Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.981458 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 11:53:58 crc kubenswrapper[4717]: I0218 11:53:58.987431 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 11:53:59 crc kubenswrapper[4717]: I0218 11:53:59.085611 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 11:53:59 crc kubenswrapper[4717]: I0218 11:53:59.148525 4717 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 11:53:59 crc kubenswrapper[4717]: I0218 11:53:59.225189 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 11:53:59 crc kubenswrapper[4717]: I0218 11:53:59.310510 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 11:53:59 crc kubenswrapper[4717]: I0218 11:53:59.346322 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 11:53:59 crc kubenswrapper[4717]: I0218 11:53:59.416089 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 11:53:59 crc kubenswrapper[4717]: I0218 11:53:59.481497 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 11:53:59 crc kubenswrapper[4717]: I0218 11:53:59.487179 4717 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 11:53:59 crc kubenswrapper[4717]: I0218 11:53:59.591061 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 11:53:59 crc kubenswrapper[4717]: I0218 11:53:59.699224 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 11:53:59 crc kubenswrapper[4717]: I0218 11:53:59.784025 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 11:53:59 crc kubenswrapper[4717]: I0218 11:53:59.793278 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 11:53:59 crc kubenswrapper[4717]: I0218 11:53:59.900065 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 11:53:59 crc kubenswrapper[4717]: I0218 11:53:59.946899 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 11:53:59 crc kubenswrapper[4717]: I0218 11:53:59.967487 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 11:53:59 crc kubenswrapper[4717]: I0218 11:53:59.988830 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.071324 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.144080 4717 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.173194 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.175987 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.217112 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.248238 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.293385 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.313572 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.329140 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.397046 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.412425 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.433104 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 11:54:00 crc 
kubenswrapper[4717]: I0218 11:54:00.542578 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.573076 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.629155 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.633329 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.635655 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.653679 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.690588 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.754801 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.757876 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.871064 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.908654 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 11:54:00 crc 
kubenswrapper[4717]: I0218 11:54:00.939092 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.951241 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.986348 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.994896 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 11:54:00 crc kubenswrapper[4717]: I0218 11:54:00.998418 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.099680 4717 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.126452 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.127377 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.133362 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.170304 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.211008 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" 
Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.289418 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.324522 4717 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.328589 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.328640 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.333953 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.334416 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.366862 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.366846428 podStartE2EDuration="21.366846428s" podCreationTimestamp="2026-02-18 11:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:54:01.349658296 +0000 UTC m=+275.751759612" watchObservedRunningTime="2026-02-18 11:54:01.366846428 +0000 UTC m=+275.768947744" Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.384089 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.396096 4717 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 
18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.418400 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.442349 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.572970 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.738111 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.762139 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.772954 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.820718 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 18 11:54:01 crc kubenswrapper[4717]: I0218 11:54:01.926017 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.017825 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.096868 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.279378 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.376534 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.377876 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.385214 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.434429 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.441000 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.546600 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.551711 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.589686 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.596551 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.636167 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.636178 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.658674 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.718231 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.721797 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.772791 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.781808 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.790218 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.854603 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 18 11:54:02 crc kubenswrapper[4717]: I0218 11:54:02.925845 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 18 11:54:03 crc kubenswrapper[4717]: I0218 11:54:03.115126 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 18 11:54:03 crc kubenswrapper[4717]: I0218 11:54:03.247072 4717 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 18 11:54:03 crc kubenswrapper[4717]: I0218 11:54:03.247386 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://6360d306f44b2cf6f6af621116f4d33501007b3d6134eba404959d891ee5ae6b" gracePeriod=5
Feb 18 11:54:03 crc kubenswrapper[4717]: I0218 11:54:03.272786 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 18 11:54:03 crc kubenswrapper[4717]: I0218 11:54:03.380806 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 18 11:54:03 crc kubenswrapper[4717]: I0218 11:54:03.405744 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 18 11:54:03 crc kubenswrapper[4717]: I0218 11:54:03.501420 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 18 11:54:03 crc kubenswrapper[4717]: I0218 11:54:03.510885 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 18 11:54:03 crc kubenswrapper[4717]: I0218 11:54:03.577641 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 18 11:54:03 crc kubenswrapper[4717]: I0218 11:54:03.603243 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 18 11:54:03 crc kubenswrapper[4717]: I0218 11:54:03.620744 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 18 11:54:03 crc kubenswrapper[4717]: I0218 11:54:03.674684 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 18 11:54:03 crc kubenswrapper[4717]: I0218 11:54:03.716786 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 18 11:54:03 crc kubenswrapper[4717]: I0218 11:54:03.722817 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 18 11:54:03 crc kubenswrapper[4717]: I0218 11:54:03.728484 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 18 11:54:04 crc kubenswrapper[4717]: I0218 11:54:04.012206 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 18 11:54:04 crc kubenswrapper[4717]: I0218 11:54:04.049082 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 18 11:54:04 crc kubenswrapper[4717]: I0218 11:54:04.189792 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 18 11:54:04 crc kubenswrapper[4717]: I0218 11:54:04.287902 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 18 11:54:04 crc kubenswrapper[4717]: I0218 11:54:04.371615 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 18 11:54:04 crc kubenswrapper[4717]: I0218 11:54:04.382934 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 18 11:54:04 crc kubenswrapper[4717]: I0218 11:54:04.420776 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 18 11:54:04 crc kubenswrapper[4717]: I0218 11:54:04.477181 4717 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 18 11:54:04 crc kubenswrapper[4717]: I0218 11:54:04.477899 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 18 11:54:04 crc kubenswrapper[4717]: I0218 11:54:04.485187 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 18 11:54:04 crc kubenswrapper[4717]: I0218 11:54:04.723888 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 18 11:54:04 crc kubenswrapper[4717]: I0218 11:54:04.797032 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 18 11:54:04 crc kubenswrapper[4717]: I0218 11:54:04.860340 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 18 11:54:04 crc kubenswrapper[4717]: I0218 11:54:04.886685 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 18 11:54:04 crc kubenswrapper[4717]: I0218 11:54:04.926766 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.061240 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.130371 4717 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"etcd-client"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.184986 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.240574 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.267671 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.296387 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.312666 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.430954 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c9n76"]
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.431298 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c9n76" podUID="9250b3da-040d-4f0c-84d0-5d795bf3479d" containerName="registry-server" containerID="cri-o://8173a5561dcdc7e460fba80f8c006c85d70327f83c603867124927d0124e39a5" gracePeriod=30
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.442233 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6ksxg"]
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.442686 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6ksxg" podUID="e1b37906-b51d-4abf-be9c-8607a92dfa40" containerName="registry-server" containerID="cri-o://7ddd4d207a9dc33d61c0f01faf9b924e3abc36b4ad3e9744256533c6c87ae620" gracePeriod=30
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.444786 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.451892 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qtw2w"]
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.452285 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" podUID="016f064e-8db6-41ed-a2af-2d9ea9169703" containerName="marketplace-operator" containerID="cri-o://d4f7be783138ab978151c2f4ca1fb2c7758525c7c9847fa427738e3680fa34e9" gracePeriod=30
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.457547 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.463252 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6fmv"]
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.463488 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n6fmv" podUID="0ee5d22d-8884-4563-8329-c475346f3a03" containerName="registry-server" containerID="cri-o://cdb82536d8a178659451ab204eca77f8785168ea183cceda6ccdecbb9f976649" gracePeriod=30
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.466216 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6bvgx"]
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.466532 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6bvgx" podUID="fb729e3d-5019-4004-876e-c5d39e77e97e" containerName="registry-server" containerID="cri-o://aaabb66559ab55486f70349a8c9f409fb242a151776e8f411bc6d314d8a63cee" gracePeriod=30
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.528118 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.541204 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.544387 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.545579 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.734439 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.742611 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.935621 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c9n76"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.948797 4717 generic.go:334] "Generic (PLEG): container finished" podID="016f064e-8db6-41ed-a2af-2d9ea9169703" containerID="d4f7be783138ab978151c2f4ca1fb2c7758525c7c9847fa427738e3680fa34e9" exitCode=0
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.948881 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" event={"ID":"016f064e-8db6-41ed-a2af-2d9ea9169703","Type":"ContainerDied","Data":"d4f7be783138ab978151c2f4ca1fb2c7758525c7c9847fa427738e3680fa34e9"}
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.950896 4717 generic.go:334] "Generic (PLEG): container finished" podID="9250b3da-040d-4f0c-84d0-5d795bf3479d" containerID="8173a5561dcdc7e460fba80f8c006c85d70327f83c603867124927d0124e39a5" exitCode=0
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.951013 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c9n76"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.951140 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c9n76" event={"ID":"9250b3da-040d-4f0c-84d0-5d795bf3479d","Type":"ContainerDied","Data":"8173a5561dcdc7e460fba80f8c006c85d70327f83c603867124927d0124e39a5"}
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.951163 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c9n76" event={"ID":"9250b3da-040d-4f0c-84d0-5d795bf3479d","Type":"ContainerDied","Data":"22cff45d9424657d1376e59cafc1b2d18d65dd21ff353c2ee08840603bc3af13"}
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.951183 4717 scope.go:117] "RemoveContainer" containerID="8173a5561dcdc7e460fba80f8c006c85d70327f83c603867124927d0124e39a5"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.971643 4717 generic.go:334] "Generic (PLEG): container finished" podID="0ee5d22d-8884-4563-8329-c475346f3a03" containerID="cdb82536d8a178659451ab204eca77f8785168ea183cceda6ccdecbb9f976649" exitCode=0
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.971746 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6fmv" event={"ID":"0ee5d22d-8884-4563-8329-c475346f3a03","Type":"ContainerDied","Data":"cdb82536d8a178659451ab204eca77f8785168ea183cceda6ccdecbb9f976649"}
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.986200 4717 scope.go:117] "RemoveContainer" containerID="ac4c5f6000ad045bbaf0b3701e8034e31018d7e551e74cd0c2d35aabc2dcf811"
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.996288 4717 generic.go:334] "Generic (PLEG): container finished" podID="fb729e3d-5019-4004-876e-c5d39e77e97e" containerID="aaabb66559ab55486f70349a8c9f409fb242a151776e8f411bc6d314d8a63cee" exitCode=0
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.996360 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bvgx" event={"ID":"fb729e3d-5019-4004-876e-c5d39e77e97e","Type":"ContainerDied","Data":"aaabb66559ab55486f70349a8c9f409fb242a151776e8f411bc6d314d8a63cee"}
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.998242 4717 generic.go:334] "Generic (PLEG): container finished" podID="e1b37906-b51d-4abf-be9c-8607a92dfa40" containerID="7ddd4d207a9dc33d61c0f01faf9b924e3abc36b4ad3e9744256533c6c87ae620" exitCode=0
Feb 18 11:54:05 crc kubenswrapper[4717]: I0218 11:54:05.998284 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ksxg" event={"ID":"e1b37906-b51d-4abf-be9c-8607a92dfa40","Type":"ContainerDied","Data":"7ddd4d207a9dc33d61c0f01faf9b924e3abc36b4ad3e9744256533c6c87ae620"}
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.011637 4717 scope.go:117] "RemoveContainer" containerID="1a07f9d3cf7eb662d7d47791fef1bc3ee7485cb3d9a72400edfaaed557a4b6dc"
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.027913 4717 scope.go:117] "RemoveContainer" containerID="8173a5561dcdc7e460fba80f8c006c85d70327f83c603867124927d0124e39a5"
Feb 18 11:54:06 crc kubenswrapper[4717]: E0218 11:54:06.028374 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8173a5561dcdc7e460fba80f8c006c85d70327f83c603867124927d0124e39a5\": container with ID starting with 8173a5561dcdc7e460fba80f8c006c85d70327f83c603867124927d0124e39a5 not found: ID does not exist" containerID="8173a5561dcdc7e460fba80f8c006c85d70327f83c603867124927d0124e39a5"
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.028399 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8173a5561dcdc7e460fba80f8c006c85d70327f83c603867124927d0124e39a5"} err="failed to get container status \"8173a5561dcdc7e460fba80f8c006c85d70327f83c603867124927d0124e39a5\": rpc error: code = NotFound desc = could not find container \"8173a5561dcdc7e460fba80f8c006c85d70327f83c603867124927d0124e39a5\": container with ID starting with 8173a5561dcdc7e460fba80f8c006c85d70327f83c603867124927d0124e39a5 not found: ID does not exist"
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.028438 4717 scope.go:117] "RemoveContainer" containerID="ac4c5f6000ad045bbaf0b3701e8034e31018d7e551e74cd0c2d35aabc2dcf811"
Feb 18 11:54:06 crc kubenswrapper[4717]: E0218 11:54:06.028643 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac4c5f6000ad045bbaf0b3701e8034e31018d7e551e74cd0c2d35aabc2dcf811\": container with ID starting with ac4c5f6000ad045bbaf0b3701e8034e31018d7e551e74cd0c2d35aabc2dcf811 not found: ID does not exist" containerID="ac4c5f6000ad045bbaf0b3701e8034e31018d7e551e74cd0c2d35aabc2dcf811"
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.028660 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac4c5f6000ad045bbaf0b3701e8034e31018d7e551e74cd0c2d35aabc2dcf811"} err="failed to get container status \"ac4c5f6000ad045bbaf0b3701e8034e31018d7e551e74cd0c2d35aabc2dcf811\": rpc error: code = NotFound desc = could not find container \"ac4c5f6000ad045bbaf0b3701e8034e31018d7e551e74cd0c2d35aabc2dcf811\": container with ID starting with ac4c5f6000ad045bbaf0b3701e8034e31018d7e551e74cd0c2d35aabc2dcf811 not found: ID does not exist"
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.028672 4717 scope.go:117] "RemoveContainer" containerID="1a07f9d3cf7eb662d7d47791fef1bc3ee7485cb3d9a72400edfaaed557a4b6dc"
Feb 18 11:54:06 crc kubenswrapper[4717]: E0218 11:54:06.028872 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a07f9d3cf7eb662d7d47791fef1bc3ee7485cb3d9a72400edfaaed557a4b6dc\": container with ID starting with 
1a07f9d3cf7eb662d7d47791fef1bc3ee7485cb3d9a72400edfaaed557a4b6dc not found: ID does not exist" containerID="1a07f9d3cf7eb662d7d47791fef1bc3ee7485cb3d9a72400edfaaed557a4b6dc"
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.028889 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a07f9d3cf7eb662d7d47791fef1bc3ee7485cb3d9a72400edfaaed557a4b6dc"} err="failed to get container status \"1a07f9d3cf7eb662d7d47791fef1bc3ee7485cb3d9a72400edfaaed557a4b6dc\": rpc error: code = NotFound desc = could not find container \"1a07f9d3cf7eb662d7d47791fef1bc3ee7485cb3d9a72400edfaaed557a4b6dc\": container with ID starting with 1a07f9d3cf7eb662d7d47791fef1bc3ee7485cb3d9a72400edfaaed557a4b6dc not found: ID does not exist"
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.052673 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9250b3da-040d-4f0c-84d0-5d795bf3479d-utilities\") pod \"9250b3da-040d-4f0c-84d0-5d795bf3479d\" (UID: \"9250b3da-040d-4f0c-84d0-5d795bf3479d\") "
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.052747 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9250b3da-040d-4f0c-84d0-5d795bf3479d-catalog-content\") pod \"9250b3da-040d-4f0c-84d0-5d795bf3479d\" (UID: \"9250b3da-040d-4f0c-84d0-5d795bf3479d\") "
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.052784 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48gl6\" (UniqueName: \"kubernetes.io/projected/9250b3da-040d-4f0c-84d0-5d795bf3479d-kube-api-access-48gl6\") pod \"9250b3da-040d-4f0c-84d0-5d795bf3479d\" (UID: \"9250b3da-040d-4f0c-84d0-5d795bf3479d\") "
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.054821 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9250b3da-040d-4f0c-84d0-5d795bf3479d-utilities" (OuterVolumeSpecName: "utilities") pod "9250b3da-040d-4f0c-84d0-5d795bf3479d" (UID: "9250b3da-040d-4f0c-84d0-5d795bf3479d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.059411 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9250b3da-040d-4f0c-84d0-5d795bf3479d-kube-api-access-48gl6" (OuterVolumeSpecName: "kube-api-access-48gl6") pod "9250b3da-040d-4f0c-84d0-5d795bf3479d" (UID: "9250b3da-040d-4f0c-84d0-5d795bf3479d"). InnerVolumeSpecName "kube-api-access-48gl6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.070238 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6bvgx"
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.074964 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6ksxg"
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.078785 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w"
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.091881 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6fmv"
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.122543 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9250b3da-040d-4f0c-84d0-5d795bf3479d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9250b3da-040d-4f0c-84d0-5d795bf3479d" (UID: "9250b3da-040d-4f0c-84d0-5d795bf3479d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.155090 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7x7z\" (UniqueName: \"kubernetes.io/projected/fb729e3d-5019-4004-876e-c5d39e77e97e-kube-api-access-k7x7z\") pod \"fb729e3d-5019-4004-876e-c5d39e77e97e\" (UID: \"fb729e3d-5019-4004-876e-c5d39e77e97e\") "
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.155223 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb729e3d-5019-4004-876e-c5d39e77e97e-utilities\") pod \"fb729e3d-5019-4004-876e-c5d39e77e97e\" (UID: \"fb729e3d-5019-4004-876e-c5d39e77e97e\") "
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.155248 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/016f064e-8db6-41ed-a2af-2d9ea9169703-marketplace-trusted-ca\") pod \"016f064e-8db6-41ed-a2af-2d9ea9169703\" (UID: \"016f064e-8db6-41ed-a2af-2d9ea9169703\") "
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.155293 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz58r\" (UniqueName: \"kubernetes.io/projected/e1b37906-b51d-4abf-be9c-8607a92dfa40-kube-api-access-bz58r\") pod \"e1b37906-b51d-4abf-be9c-8607a92dfa40\" (UID: \"e1b37906-b51d-4abf-be9c-8607a92dfa40\") "
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.155335 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84n7m\" (UniqueName: \"kubernetes.io/projected/016f064e-8db6-41ed-a2af-2d9ea9169703-kube-api-access-84n7m\") pod \"016f064e-8db6-41ed-a2af-2d9ea9169703\" (UID: \"016f064e-8db6-41ed-a2af-2d9ea9169703\") "
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.155376 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1b37906-b51d-4abf-be9c-8607a92dfa40-catalog-content\") pod \"e1b37906-b51d-4abf-be9c-8607a92dfa40\" (UID: \"e1b37906-b51d-4abf-be9c-8607a92dfa40\") "
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.156507 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb729e3d-5019-4004-876e-c5d39e77e97e-utilities" (OuterVolumeSpecName: "utilities") pod "fb729e3d-5019-4004-876e-c5d39e77e97e" (UID: "fb729e3d-5019-4004-876e-c5d39e77e97e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.157103 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/016f064e-8db6-41ed-a2af-2d9ea9169703-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "016f064e-8db6-41ed-a2af-2d9ea9169703" (UID: "016f064e-8db6-41ed-a2af-2d9ea9169703"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.157141 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1b37906-b51d-4abf-be9c-8607a92dfa40-utilities\") pod \"e1b37906-b51d-4abf-be9c-8607a92dfa40\" (UID: \"e1b37906-b51d-4abf-be9c-8607a92dfa40\") "
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.157243 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb729e3d-5019-4004-876e-c5d39e77e97e-catalog-content\") pod \"fb729e3d-5019-4004-876e-c5d39e77e97e\" (UID: \"fb729e3d-5019-4004-876e-c5d39e77e97e\") "
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.163810 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/016f064e-8db6-41ed-a2af-2d9ea9169703-marketplace-operator-metrics\") pod \"016f064e-8db6-41ed-a2af-2d9ea9169703\" (UID: \"016f064e-8db6-41ed-a2af-2d9ea9169703\") "
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.157980 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1b37906-b51d-4abf-be9c-8607a92dfa40-utilities" (OuterVolumeSpecName: "utilities") pod "e1b37906-b51d-4abf-be9c-8607a92dfa40" (UID: "e1b37906-b51d-4abf-be9c-8607a92dfa40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.159578 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/016f064e-8db6-41ed-a2af-2d9ea9169703-kube-api-access-84n7m" (OuterVolumeSpecName: "kube-api-access-84n7m") pod "016f064e-8db6-41ed-a2af-2d9ea9169703" (UID: "016f064e-8db6-41ed-a2af-2d9ea9169703"). InnerVolumeSpecName "kube-api-access-84n7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.159716 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb729e3d-5019-4004-876e-c5d39e77e97e-kube-api-access-k7x7z" (OuterVolumeSpecName: "kube-api-access-k7x7z") pod "fb729e3d-5019-4004-876e-c5d39e77e97e" (UID: "fb729e3d-5019-4004-876e-c5d39e77e97e"). InnerVolumeSpecName "kube-api-access-k7x7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.160587 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b37906-b51d-4abf-be9c-8607a92dfa40-kube-api-access-bz58r" (OuterVolumeSpecName: "kube-api-access-bz58r") pod "e1b37906-b51d-4abf-be9c-8607a92dfa40" (UID: "e1b37906-b51d-4abf-be9c-8607a92dfa40"). InnerVolumeSpecName "kube-api-access-bz58r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.164284 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7x7z\" (UniqueName: \"kubernetes.io/projected/fb729e3d-5019-4004-876e-c5d39e77e97e-kube-api-access-k7x7z\") on node \"crc\" DevicePath \"\""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.164306 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9250b3da-040d-4f0c-84d0-5d795bf3479d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.164318 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb729e3d-5019-4004-876e-c5d39e77e97e-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.164327 4717 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/016f064e-8db6-41ed-a2af-2d9ea9169703-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.164336 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz58r\" (UniqueName: \"kubernetes.io/projected/e1b37906-b51d-4abf-be9c-8607a92dfa40-kube-api-access-bz58r\") on node \"crc\" DevicePath \"\""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.164344 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48gl6\" (UniqueName: \"kubernetes.io/projected/9250b3da-040d-4f0c-84d0-5d795bf3479d-kube-api-access-48gl6\") on node \"crc\" DevicePath \"\""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.164352 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84n7m\" (UniqueName: \"kubernetes.io/projected/016f064e-8db6-41ed-a2af-2d9ea9169703-kube-api-access-84n7m\") on node \"crc\" DevicePath \"\""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.164364 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1b37906-b51d-4abf-be9c-8607a92dfa40-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.164372 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9250b3da-040d-4f0c-84d0-5d795bf3479d-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.167299 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/016f064e-8db6-41ed-a2af-2d9ea9169703-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "016f064e-8db6-41ed-a2af-2d9ea9169703" (UID: "016f064e-8db6-41ed-a2af-2d9ea9169703"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.169564 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.212778 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1b37906-b51d-4abf-be9c-8607a92dfa40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1b37906-b51d-4abf-be9c-8607a92dfa40" (UID: "e1b37906-b51d-4abf-be9c-8607a92dfa40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.266076 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee5d22d-8884-4563-8329-c475346f3a03-catalog-content\") pod \"0ee5d22d-8884-4563-8329-c475346f3a03\" (UID: \"0ee5d22d-8884-4563-8329-c475346f3a03\") "
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.266179 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pthzp\" (UniqueName: \"kubernetes.io/projected/0ee5d22d-8884-4563-8329-c475346f3a03-kube-api-access-pthzp\") pod \"0ee5d22d-8884-4563-8329-c475346f3a03\" (UID: \"0ee5d22d-8884-4563-8329-c475346f3a03\") "
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.266210 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee5d22d-8884-4563-8329-c475346f3a03-utilities\") pod \"0ee5d22d-8884-4563-8329-c475346f3a03\" (UID: \"0ee5d22d-8884-4563-8329-c475346f3a03\") "
Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.266516 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1b37906-b51d-4abf-be9c-8607a92dfa40-catalog-content\") on node \"crc\" 
DevicePath \"\"" Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.266536 4717 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/016f064e-8db6-41ed-a2af-2d9ea9169703-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.267905 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee5d22d-8884-4563-8329-c475346f3a03-utilities" (OuterVolumeSpecName: "utilities") pod "0ee5d22d-8884-4563-8329-c475346f3a03" (UID: "0ee5d22d-8884-4563-8329-c475346f3a03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.271383 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee5d22d-8884-4563-8329-c475346f3a03-kube-api-access-pthzp" (OuterVolumeSpecName: "kube-api-access-pthzp") pod "0ee5d22d-8884-4563-8329-c475346f3a03" (UID: "0ee5d22d-8884-4563-8329-c475346f3a03"). InnerVolumeSpecName "kube-api-access-pthzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.279861 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c9n76"] Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.283821 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c9n76"] Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.299587 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb729e3d-5019-4004-876e-c5d39e77e97e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb729e3d-5019-4004-876e-c5d39e77e97e" (UID: "fb729e3d-5019-4004-876e-c5d39e77e97e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.305453 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee5d22d-8884-4563-8329-c475346f3a03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ee5d22d-8884-4563-8329-c475346f3a03" (UID: "0ee5d22d-8884-4563-8329-c475346f3a03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.321811 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.368226 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pthzp\" (UniqueName: \"kubernetes.io/projected/0ee5d22d-8884-4563-8329-c475346f3a03-kube-api-access-pthzp\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.368318 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee5d22d-8884-4563-8329-c475346f3a03-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.368331 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee5d22d-8884-4563-8329-c475346f3a03-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.368340 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb729e3d-5019-4004-876e-c5d39e77e97e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.415310 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 11:54:06 crc 
kubenswrapper[4717]: I0218 11:54:06.419978 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.442890 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.481519 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.512585 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.556538 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 11:54:06 crc kubenswrapper[4717]: I0218 11:54:06.875513 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.005941 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.006133 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qtw2w" event={"ID":"016f064e-8db6-41ed-a2af-2d9ea9169703","Type":"ContainerDied","Data":"0b666ea7decf42cecfdb7fbd86d95219da1eecd3d200fcfe91b7a8d381330899"} Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.006221 4717 scope.go:117] "RemoveContainer" containerID="d4f7be783138ab978151c2f4ca1fb2c7758525c7c9847fa427738e3680fa34e9" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.010571 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6fmv" event={"ID":"0ee5d22d-8884-4563-8329-c475346f3a03","Type":"ContainerDied","Data":"3f9663309aae98dbfc5580b7b071e87731d8b7a08f02a68771107e2ae3993ff3"} Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.010692 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6fmv" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.023689 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6bvgx" event={"ID":"fb729e3d-5019-4004-876e-c5d39e77e97e","Type":"ContainerDied","Data":"5d917d020a5fcf46a3b4ff6c19c3a5ac305383990c0730152af14731f93dd5f0"} Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.023873 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6bvgx" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.024968 4717 scope.go:117] "RemoveContainer" containerID="cdb82536d8a178659451ab204eca77f8785168ea183cceda6ccdecbb9f976649" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.042639 4717 scope.go:117] "RemoveContainer" containerID="cf751783a5390166b45fdc4360b319a38bdecb260b178c551c7ca26e30749d74" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.044455 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6ksxg" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.072867 4717 scope.go:117] "RemoveContainer" containerID="68cd50bb6525a71d6977658cf31890898cf4ebc6ba614c4600a8f3ec02d32673" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.075447 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9250b3da-040d-4f0c-84d0-5d795bf3479d" path="/var/lib/kubelet/pods/9250b3da-040d-4f0c-84d0-5d795bf3479d/volumes" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.076609 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ksxg" event={"ID":"e1b37906-b51d-4abf-be9c-8607a92dfa40","Type":"ContainerDied","Data":"96fa754b0b28f0f006c01ddfb79192fa8a2cdc74a164e42276a51dfc7b1eb982"} Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.076650 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qtw2w"] Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.078155 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qtw2w"] Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.091709 4717 scope.go:117] "RemoveContainer" containerID="aaabb66559ab55486f70349a8c9f409fb242a151776e8f411bc6d314d8a63cee" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.106985 4717 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6fmv"] Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.112493 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6fmv"] Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.119067 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6bvgx"] Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.121560 4717 scope.go:117] "RemoveContainer" containerID="db193766575fa2b9e5ec498bd333fb38727ff98c44164f9c03c8d2596282c08d" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.124953 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6bvgx"] Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.131092 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6ksxg"] Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.136400 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6ksxg"] Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.140945 4717 scope.go:117] "RemoveContainer" containerID="e69fc61350f0d83561ffd2832c9aaf7963da7061c08a497fe3fb8d0b95e63a46" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.161775 4717 scope.go:117] "RemoveContainer" containerID="7ddd4d207a9dc33d61c0f01faf9b924e3abc36b4ad3e9744256533c6c87ae620" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.179027 4717 scope.go:117] "RemoveContainer" containerID="cd117612dc1bfcd594f6a57ca1a8832237a12941274a8ca9b811a8cb5b22ae37" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.194923 4717 scope.go:117] "RemoveContainer" containerID="35436349804c7ef36b7b8d6013fcc68e8f160e3ad2d94f0e2da3c83812f2d68a" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.346622 4717 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 11:54:07 crc kubenswrapper[4717]: I0218 11:54:07.483054 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 11:54:08 crc kubenswrapper[4717]: I0218 11:54:08.151593 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 11:54:08 crc kubenswrapper[4717]: I0218 11:54:08.332304 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 11:54:08 crc kubenswrapper[4717]: I0218 11:54:08.720347 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 11:54:08 crc kubenswrapper[4717]: I0218 11:54:08.726531 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 11:54:08 crc kubenswrapper[4717]: I0218 11:54:08.835415 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 11:54:08 crc kubenswrapper[4717]: I0218 11:54:08.835632 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.000733 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.000821 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.000839 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.000873 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.000906 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.001121 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: 
"var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.001154 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.001171 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.001188 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.014148 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.042924 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="016f064e-8db6-41ed-a2af-2d9ea9169703" path="/var/lib/kubelet/pods/016f064e-8db6-41ed-a2af-2d9ea9169703/volumes" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.043795 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ee5d22d-8884-4563-8329-c475346f3a03" path="/var/lib/kubelet/pods/0ee5d22d-8884-4563-8329-c475346f3a03/volumes" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.044610 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1b37906-b51d-4abf-be9c-8607a92dfa40" path="/var/lib/kubelet/pods/e1b37906-b51d-4abf-be9c-8607a92dfa40/volumes" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.046578 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.047006 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb729e3d-5019-4004-876e-c5d39e77e97e" path="/var/lib/kubelet/pods/fb729e3d-5019-4004-876e-c5d39e77e97e/volumes" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.063461 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.063540 4717 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="6360d306f44b2cf6f6af621116f4d33501007b3d6134eba404959d891ee5ae6b" exitCode=137 Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.063605 4717 scope.go:117] "RemoveContainer" containerID="6360d306f44b2cf6f6af621116f4d33501007b3d6134eba404959d891ee5ae6b" Feb 18 11:54:09 crc 
kubenswrapper[4717]: I0218 11:54:09.063602 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.079639 4717 scope.go:117] "RemoveContainer" containerID="6360d306f44b2cf6f6af621116f4d33501007b3d6134eba404959d891ee5ae6b" Feb 18 11:54:09 crc kubenswrapper[4717]: E0218 11:54:09.080105 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6360d306f44b2cf6f6af621116f4d33501007b3d6134eba404959d891ee5ae6b\": container with ID starting with 6360d306f44b2cf6f6af621116f4d33501007b3d6134eba404959d891ee5ae6b not found: ID does not exist" containerID="6360d306f44b2cf6f6af621116f4d33501007b3d6134eba404959d891ee5ae6b" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.080156 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6360d306f44b2cf6f6af621116f4d33501007b3d6134eba404959d891ee5ae6b"} err="failed to get container status \"6360d306f44b2cf6f6af621116f4d33501007b3d6134eba404959d891ee5ae6b\": rpc error: code = NotFound desc = could not find container \"6360d306f44b2cf6f6af621116f4d33501007b3d6134eba404959d891ee5ae6b\": container with ID starting with 6360d306f44b2cf6f6af621116f4d33501007b3d6134eba404959d891ee5ae6b not found: ID does not exist" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.097912 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.102230 4717 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.102286 4717 reconciler_common.go:293] "Volume detached for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.102300 4717 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.102332 4717 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:09 crc kubenswrapper[4717]: I0218 11:54:09.102345 4717 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 18 11:54:10 crc kubenswrapper[4717]: I0218 11:54:10.674704 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 11:54:26 crc kubenswrapper[4717]: I0218 11:54:26.636396 4717 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 18 11:54:29 crc kubenswrapper[4717]: I0218 11:54:29.175244 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 18 11:54:29 crc kubenswrapper[4717]: I0218 11:54:29.177175 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 11:54:29 crc kubenswrapper[4717]: I0218 11:54:29.177215 4717 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" 
containerID="90096d237b25255e4778d3aa592ff2d80dd1217c56e8ec8fc64fef5b74953518" exitCode=137 Feb 18 11:54:29 crc kubenswrapper[4717]: I0218 11:54:29.177252 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"90096d237b25255e4778d3aa592ff2d80dd1217c56e8ec8fc64fef5b74953518"} Feb 18 11:54:29 crc kubenswrapper[4717]: I0218 11:54:29.177350 4717 scope.go:117] "RemoveContainer" containerID="0518de5efac3b9e834d1dd6719af6aa0dbf8dda57336e159a65f80b24afcad5d" Feb 18 11:54:30 crc kubenswrapper[4717]: I0218 11:54:30.184332 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 18 11:54:30 crc kubenswrapper[4717]: I0218 11:54:30.185438 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7590e54d90f0900ae05c9593b1445d42f6171a813e91f9f82ee59faa75a709dd"} Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.679547 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-82l5m"] Feb 18 11:54:32 crc kubenswrapper[4717]: E0218 11:54:32.680826 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b37906-b51d-4abf-be9c-8607a92dfa40" containerName="extract-utilities" Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.680931 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b37906-b51d-4abf-be9c-8607a92dfa40" containerName="extract-utilities" Feb 18 11:54:32 crc kubenswrapper[4717]: E0218 11:54:32.681018 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9250b3da-040d-4f0c-84d0-5d795bf3479d" containerName="registry-server" Feb 18 11:54:32 crc 
kubenswrapper[4717]: I0218 11:54:32.681099 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="9250b3da-040d-4f0c-84d0-5d795bf3479d" containerName="registry-server" Feb 18 11:54:32 crc kubenswrapper[4717]: E0218 11:54:32.681176 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.681251 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 11:54:32 crc kubenswrapper[4717]: E0218 11:54:32.681398 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb729e3d-5019-4004-876e-c5d39e77e97e" containerName="extract-utilities" Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.681485 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb729e3d-5019-4004-876e-c5d39e77e97e" containerName="extract-utilities" Feb 18 11:54:32 crc kubenswrapper[4717]: E0218 11:54:32.681570 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee5d22d-8884-4563-8329-c475346f3a03" containerName="registry-server" Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.681646 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee5d22d-8884-4563-8329-c475346f3a03" containerName="registry-server" Feb 18 11:54:32 crc kubenswrapper[4717]: E0218 11:54:32.681805 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9250b3da-040d-4f0c-84d0-5d795bf3479d" containerName="extract-content" Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.681820 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="9250b3da-040d-4f0c-84d0-5d795bf3479d" containerName="extract-content" Feb 18 11:54:32 crc kubenswrapper[4717]: E0218 11:54:32.681833 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb729e3d-5019-4004-876e-c5d39e77e97e" containerName="extract-content" Feb 18 11:54:32 crc 
kubenswrapper[4717]: I0218 11:54:32.681840 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb729e3d-5019-4004-876e-c5d39e77e97e" containerName="extract-content" Feb 18 11:54:32 crc kubenswrapper[4717]: E0218 11:54:32.681850 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d" containerName="installer" Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.681858 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d" containerName="installer" Feb 18 11:54:32 crc kubenswrapper[4717]: E0218 11:54:32.681869 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b37906-b51d-4abf-be9c-8607a92dfa40" containerName="extract-content" Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.681877 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b37906-b51d-4abf-be9c-8607a92dfa40" containerName="extract-content" Feb 18 11:54:32 crc kubenswrapper[4717]: E0218 11:54:32.681885 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee5d22d-8884-4563-8329-c475346f3a03" containerName="extract-content" Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.681893 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee5d22d-8884-4563-8329-c475346f3a03" containerName="extract-content" Feb 18 11:54:32 crc kubenswrapper[4717]: E0218 11:54:32.681900 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b37906-b51d-4abf-be9c-8607a92dfa40" containerName="registry-server" Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.681906 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b37906-b51d-4abf-be9c-8607a92dfa40" containerName="registry-server" Feb 18 11:54:32 crc kubenswrapper[4717]: E0218 11:54:32.681912 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016f064e-8db6-41ed-a2af-2d9ea9169703" containerName="marketplace-operator" Feb 18 11:54:32 crc kubenswrapper[4717]: 
I0218 11:54:32.681918 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="016f064e-8db6-41ed-a2af-2d9ea9169703" containerName="marketplace-operator"
Feb 18 11:54:32 crc kubenswrapper[4717]: E0218 11:54:32.681925 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9250b3da-040d-4f0c-84d0-5d795bf3479d" containerName="extract-utilities"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.681932 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="9250b3da-040d-4f0c-84d0-5d795bf3479d" containerName="extract-utilities"
Feb 18 11:54:32 crc kubenswrapper[4717]: E0218 11:54:32.681940 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee5d22d-8884-4563-8329-c475346f3a03" containerName="extract-utilities"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.681946 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee5d22d-8884-4563-8329-c475346f3a03" containerName="extract-utilities"
Feb 18 11:54:32 crc kubenswrapper[4717]: E0218 11:54:32.681956 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb729e3d-5019-4004-876e-c5d39e77e97e" containerName="registry-server"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.681962 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb729e3d-5019-4004-876e-c5d39e77e97e" containerName="registry-server"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.682053 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="11c59d7a-b5c3-4c50-bf83-0c9e4a93a36d" containerName="installer"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.682066 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.682076 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="9250b3da-040d-4f0c-84d0-5d795bf3479d" containerName="registry-server"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.682084 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee5d22d-8884-4563-8329-c475346f3a03" containerName="registry-server"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.682093 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb729e3d-5019-4004-876e-c5d39e77e97e" containerName="registry-server"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.682102 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b37906-b51d-4abf-be9c-8607a92dfa40" containerName="registry-server"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.682107 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="016f064e-8db6-41ed-a2af-2d9ea9169703" containerName="marketplace-operator"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.682977 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82l5m"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.685128 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.686173 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.686800 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.699377 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-82l5m"]
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.779588 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05a9e06-924f-407e-a7f8-01b14310f300-utilities\") pod \"redhat-marketplace-82l5m\" (UID: \"b05a9e06-924f-407e-a7f8-01b14310f300\") " pod="openshift-marketplace/redhat-marketplace-82l5m"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.779696 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm5jr\" (UniqueName: \"kubernetes.io/projected/b05a9e06-924f-407e-a7f8-01b14310f300-kube-api-access-zm5jr\") pod \"redhat-marketplace-82l5m\" (UID: \"b05a9e06-924f-407e-a7f8-01b14310f300\") " pod="openshift-marketplace/redhat-marketplace-82l5m"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.779731 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05a9e06-924f-407e-a7f8-01b14310f300-catalog-content\") pod \"redhat-marketplace-82l5m\" (UID: \"b05a9e06-924f-407e-a7f8-01b14310f300\") " pod="openshift-marketplace/redhat-marketplace-82l5m"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.882924 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm5jr\" (UniqueName: \"kubernetes.io/projected/b05a9e06-924f-407e-a7f8-01b14310f300-kube-api-access-zm5jr\") pod \"redhat-marketplace-82l5m\" (UID: \"b05a9e06-924f-407e-a7f8-01b14310f300\") " pod="openshift-marketplace/redhat-marketplace-82l5m"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.882997 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05a9e06-924f-407e-a7f8-01b14310f300-catalog-content\") pod \"redhat-marketplace-82l5m\" (UID: \"b05a9e06-924f-407e-a7f8-01b14310f300\") " pod="openshift-marketplace/redhat-marketplace-82l5m"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.883553 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05a9e06-924f-407e-a7f8-01b14310f300-utilities\") pod \"redhat-marketplace-82l5m\" (UID: \"b05a9e06-924f-407e-a7f8-01b14310f300\") " pod="openshift-marketplace/redhat-marketplace-82l5m"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.883787 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05a9e06-924f-407e-a7f8-01b14310f300-utilities\") pod \"redhat-marketplace-82l5m\" (UID: \"b05a9e06-924f-407e-a7f8-01b14310f300\") " pod="openshift-marketplace/redhat-marketplace-82l5m"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.883494 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05a9e06-924f-407e-a7f8-01b14310f300-catalog-content\") pod \"redhat-marketplace-82l5m\" (UID: \"b05a9e06-924f-407e-a7f8-01b14310f300\") " pod="openshift-marketplace/redhat-marketplace-82l5m"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.889538 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-24h5b"]
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.890766 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24h5b"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.895500 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.915595 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-24h5b"]
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.926090 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm5jr\" (UniqueName: \"kubernetes.io/projected/b05a9e06-924f-407e-a7f8-01b14310f300-kube-api-access-zm5jr\") pod \"redhat-marketplace-82l5m\" (UID: \"b05a9e06-924f-407e-a7f8-01b14310f300\") " pod="openshift-marketplace/redhat-marketplace-82l5m"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.985163 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cpvl\" (UniqueName: \"kubernetes.io/projected/9c91e056-1a1c-4444-bb0a-7557342ee962-kube-api-access-5cpvl\") pod \"redhat-operators-24h5b\" (UID: \"9c91e056-1a1c-4444-bb0a-7557342ee962\") " pod="openshift-marketplace/redhat-operators-24h5b"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.985222 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c91e056-1a1c-4444-bb0a-7557342ee962-catalog-content\") pod \"redhat-operators-24h5b\" (UID: \"9c91e056-1a1c-4444-bb0a-7557342ee962\") " pod="openshift-marketplace/redhat-operators-24h5b"
Feb 18 11:54:32 crc kubenswrapper[4717]: I0218 11:54:32.985239 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c91e056-1a1c-4444-bb0a-7557342ee962-utilities\") pod \"redhat-operators-24h5b\" (UID: \"9c91e056-1a1c-4444-bb0a-7557342ee962\") " pod="openshift-marketplace/redhat-operators-24h5b"
Feb 18 11:54:33 crc kubenswrapper[4717]: I0218 11:54:33.006250 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82l5m"
Feb 18 11:54:33 crc kubenswrapper[4717]: I0218 11:54:33.086199 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c91e056-1a1c-4444-bb0a-7557342ee962-catalog-content\") pod \"redhat-operators-24h5b\" (UID: \"9c91e056-1a1c-4444-bb0a-7557342ee962\") " pod="openshift-marketplace/redhat-operators-24h5b"
Feb 18 11:54:33 crc kubenswrapper[4717]: I0218 11:54:33.086465 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c91e056-1a1c-4444-bb0a-7557342ee962-utilities\") pod \"redhat-operators-24h5b\" (UID: \"9c91e056-1a1c-4444-bb0a-7557342ee962\") " pod="openshift-marketplace/redhat-operators-24h5b"
Feb 18 11:54:33 crc kubenswrapper[4717]: I0218 11:54:33.086549 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cpvl\" (UniqueName: \"kubernetes.io/projected/9c91e056-1a1c-4444-bb0a-7557342ee962-kube-api-access-5cpvl\") pod \"redhat-operators-24h5b\" (UID: \"9c91e056-1a1c-4444-bb0a-7557342ee962\") " pod="openshift-marketplace/redhat-operators-24h5b"
Feb 18 11:54:33 crc kubenswrapper[4717]: I0218 11:54:33.087213 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c91e056-1a1c-4444-bb0a-7557342ee962-catalog-content\") pod \"redhat-operators-24h5b\" (UID: \"9c91e056-1a1c-4444-bb0a-7557342ee962\") " pod="openshift-marketplace/redhat-operators-24h5b"
Feb 18 11:54:33 crc kubenswrapper[4717]: I0218 11:54:33.087528 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c91e056-1a1c-4444-bb0a-7557342ee962-utilities\") pod \"redhat-operators-24h5b\" (UID: \"9c91e056-1a1c-4444-bb0a-7557342ee962\") " pod="openshift-marketplace/redhat-operators-24h5b"
Feb 18 11:54:33 crc kubenswrapper[4717]: I0218 11:54:33.109846 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cpvl\" (UniqueName: \"kubernetes.io/projected/9c91e056-1a1c-4444-bb0a-7557342ee962-kube-api-access-5cpvl\") pod \"redhat-operators-24h5b\" (UID: \"9c91e056-1a1c-4444-bb0a-7557342ee962\") " pod="openshift-marketplace/redhat-operators-24h5b"
Feb 18 11:54:33 crc kubenswrapper[4717]: I0218 11:54:33.203494 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24h5b"
Feb 18 11:54:33 crc kubenswrapper[4717]: I0218 11:54:33.412825 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-82l5m"]
Feb 18 11:54:33 crc kubenswrapper[4717]: W0218 11:54:33.416643 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb05a9e06_924f_407e_a7f8_01b14310f300.slice/crio-8616f73393e7d9c5d5f4e43bdaaabea9284664de5e441f7e9235cc0cbd6d51f6 WatchSource:0}: Error finding container 8616f73393e7d9c5d5f4e43bdaaabea9284664de5e441f7e9235cc0cbd6d51f6: Status 404 returned error can't find the container with id 8616f73393e7d9c5d5f4e43bdaaabea9284664de5e441f7e9235cc0cbd6d51f6
Feb 18 11:54:33 crc kubenswrapper[4717]: I0218 11:54:33.543659 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 11:54:33 crc kubenswrapper[4717]: I0218 11:54:33.579086 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-24h5b"]
Feb 18 11:54:33 crc kubenswrapper[4717]: W0218 11:54:33.608112 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c91e056_1a1c_4444_bb0a_7557342ee962.slice/crio-11be05aa90af9812ac4a8cfbaffe2368f2644ab63a52e5d9d54fd69a6920f288 WatchSource:0}: Error finding container 11be05aa90af9812ac4a8cfbaffe2368f2644ab63a52e5d9d54fd69a6920f288: Status 404 returned error can't find the container with id 11be05aa90af9812ac4a8cfbaffe2368f2644ab63a52e5d9d54fd69a6920f288
Feb 18 11:54:34 crc kubenswrapper[4717]: I0218 11:54:34.211938 4717 generic.go:334] "Generic (PLEG): container finished" podID="9c91e056-1a1c-4444-bb0a-7557342ee962" containerID="83c6e2c85b189c2d327fcac00b663b74430f20a1fed48e8b396aaecc3c08be67" exitCode=0
Feb 18 11:54:34 crc kubenswrapper[4717]: I0218 11:54:34.211988 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24h5b" event={"ID":"9c91e056-1a1c-4444-bb0a-7557342ee962","Type":"ContainerDied","Data":"83c6e2c85b189c2d327fcac00b663b74430f20a1fed48e8b396aaecc3c08be67"}
Feb 18 11:54:34 crc kubenswrapper[4717]: I0218 11:54:34.212032 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24h5b" event={"ID":"9c91e056-1a1c-4444-bb0a-7557342ee962","Type":"ContainerStarted","Data":"11be05aa90af9812ac4a8cfbaffe2368f2644ab63a52e5d9d54fd69a6920f288"}
Feb 18 11:54:34 crc kubenswrapper[4717]: I0218 11:54:34.214844 4717 generic.go:334] "Generic (PLEG): container finished" podID="b05a9e06-924f-407e-a7f8-01b14310f300" containerID="bf2d82457eac1a06fa560ebc7d7f619a32295c5c0b326066e17018bdfa030535" exitCode=0
Feb 18 11:54:34 crc kubenswrapper[4717]: I0218 11:54:34.214878 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82l5m" event={"ID":"b05a9e06-924f-407e-a7f8-01b14310f300","Type":"ContainerDied","Data":"bf2d82457eac1a06fa560ebc7d7f619a32295c5c0b326066e17018bdfa030535"}
Feb 18 11:54:34 crc kubenswrapper[4717]: I0218 11:54:34.214902 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82l5m" event={"ID":"b05a9e06-924f-407e-a7f8-01b14310f300","Type":"ContainerStarted","Data":"8616f73393e7d9c5d5f4e43bdaaabea9284664de5e441f7e9235cc0cbd6d51f6"}
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.090051 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nzfwz"]
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.091644 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nzfwz"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.094573 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.095493 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nzfwz"]
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.213566 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89193f23-0851-4c72-8fa7-bdefb5b47de9-utilities\") pod \"certified-operators-nzfwz\" (UID: \"89193f23-0851-4c72-8fa7-bdefb5b47de9\") " pod="openshift-marketplace/certified-operators-nzfwz"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.213630 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2gz9\" (UniqueName: \"kubernetes.io/projected/89193f23-0851-4c72-8fa7-bdefb5b47de9-kube-api-access-x2gz9\") pod \"certified-operators-nzfwz\" (UID: \"89193f23-0851-4c72-8fa7-bdefb5b47de9\") " pod="openshift-marketplace/certified-operators-nzfwz"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.213685 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89193f23-0851-4c72-8fa7-bdefb5b47de9-catalog-content\") pod \"certified-operators-nzfwz\" (UID: \"89193f23-0851-4c72-8fa7-bdefb5b47de9\") " pod="openshift-marketplace/certified-operators-nzfwz"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.221846 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82l5m" event={"ID":"b05a9e06-924f-407e-a7f8-01b14310f300","Type":"ContainerStarted","Data":"b8706e2f438368605bf6173c627ba4084c9f82cd70be690c0e215a7869c18003"}
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.223630 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24h5b" event={"ID":"9c91e056-1a1c-4444-bb0a-7557342ee962","Type":"ContainerStarted","Data":"ea96444c95a8525fbd1cb3f666de50047001e8d4e869f324da83305fef7c8bfc"}
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.273531 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rv6t6"]
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.274660 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rv6t6"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.276744 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.281568 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rv6t6"]
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.315188 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89193f23-0851-4c72-8fa7-bdefb5b47de9-utilities\") pod \"certified-operators-nzfwz\" (UID: \"89193f23-0851-4c72-8fa7-bdefb5b47de9\") " pod="openshift-marketplace/certified-operators-nzfwz"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.315279 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2gz9\" (UniqueName: \"kubernetes.io/projected/89193f23-0851-4c72-8fa7-bdefb5b47de9-kube-api-access-x2gz9\") pod \"certified-operators-nzfwz\" (UID: \"89193f23-0851-4c72-8fa7-bdefb5b47de9\") " pod="openshift-marketplace/certified-operators-nzfwz"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.315328 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89193f23-0851-4c72-8fa7-bdefb5b47de9-catalog-content\") pod \"certified-operators-nzfwz\" (UID: \"89193f23-0851-4c72-8fa7-bdefb5b47de9\") " pod="openshift-marketplace/certified-operators-nzfwz"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.315814 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89193f23-0851-4c72-8fa7-bdefb5b47de9-utilities\") pod \"certified-operators-nzfwz\" (UID: \"89193f23-0851-4c72-8fa7-bdefb5b47de9\") " pod="openshift-marketplace/certified-operators-nzfwz"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.316095 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89193f23-0851-4c72-8fa7-bdefb5b47de9-catalog-content\") pod \"certified-operators-nzfwz\" (UID: \"89193f23-0851-4c72-8fa7-bdefb5b47de9\") " pod="openshift-marketplace/certified-operators-nzfwz"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.337738 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2gz9\" (UniqueName: \"kubernetes.io/projected/89193f23-0851-4c72-8fa7-bdefb5b47de9-kube-api-access-x2gz9\") pod \"certified-operators-nzfwz\" (UID: \"89193f23-0851-4c72-8fa7-bdefb5b47de9\") " pod="openshift-marketplace/certified-operators-nzfwz"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.408158 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nzfwz"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.416332 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/253e019a-02ea-41f5-bf51-52340512ad50-catalog-content\") pod \"community-operators-rv6t6\" (UID: \"253e019a-02ea-41f5-bf51-52340512ad50\") " pod="openshift-marketplace/community-operators-rv6t6"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.416470 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/253e019a-02ea-41f5-bf51-52340512ad50-utilities\") pod \"community-operators-rv6t6\" (UID: \"253e019a-02ea-41f5-bf51-52340512ad50\") " pod="openshift-marketplace/community-operators-rv6t6"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.416526 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxmml\" (UniqueName: \"kubernetes.io/projected/253e019a-02ea-41f5-bf51-52340512ad50-kube-api-access-gxmml\") pod \"community-operators-rv6t6\" (UID: \"253e019a-02ea-41f5-bf51-52340512ad50\") " pod="openshift-marketplace/community-operators-rv6t6"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.517194 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxmml\" (UniqueName: \"kubernetes.io/projected/253e019a-02ea-41f5-bf51-52340512ad50-kube-api-access-gxmml\") pod \"community-operators-rv6t6\" (UID: \"253e019a-02ea-41f5-bf51-52340512ad50\") " pod="openshift-marketplace/community-operators-rv6t6"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.517580 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/253e019a-02ea-41f5-bf51-52340512ad50-catalog-content\") pod \"community-operators-rv6t6\" (UID: \"253e019a-02ea-41f5-bf51-52340512ad50\") " pod="openshift-marketplace/community-operators-rv6t6"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.517714 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/253e019a-02ea-41f5-bf51-52340512ad50-utilities\") pod \"community-operators-rv6t6\" (UID: \"253e019a-02ea-41f5-bf51-52340512ad50\") " pod="openshift-marketplace/community-operators-rv6t6"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.518224 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/253e019a-02ea-41f5-bf51-52340512ad50-catalog-content\") pod \"community-operators-rv6t6\" (UID: \"253e019a-02ea-41f5-bf51-52340512ad50\") " pod="openshift-marketplace/community-operators-rv6t6"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.518250 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/253e019a-02ea-41f5-bf51-52340512ad50-utilities\") pod \"community-operators-rv6t6\" (UID: \"253e019a-02ea-41f5-bf51-52340512ad50\") " pod="openshift-marketplace/community-operators-rv6t6"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.547856 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxmml\" (UniqueName: \"kubernetes.io/projected/253e019a-02ea-41f5-bf51-52340512ad50-kube-api-access-gxmml\") pod \"community-operators-rv6t6\" (UID: \"253e019a-02ea-41f5-bf51-52340512ad50\") " pod="openshift-marketplace/community-operators-rv6t6"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.615822 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rv6t6"
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.860474 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nzfwz"]
Feb 18 11:54:35 crc kubenswrapper[4717]: I0218 11:54:35.993598 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rv6t6"]
Feb 18 11:54:36 crc kubenswrapper[4717]: W0218 11:54:36.068587 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod253e019a_02ea_41f5_bf51_52340512ad50.slice/crio-ad00776c4ce191f60768fd1843628c53c3c9249c13194fa9755731f4ed934abd WatchSource:0}: Error finding container ad00776c4ce191f60768fd1843628c53c3c9249c13194fa9755731f4ed934abd: Status 404 returned error can't find the container with id ad00776c4ce191f60768fd1843628c53c3c9249c13194fa9755731f4ed934abd
Feb 18 11:54:36 crc kubenswrapper[4717]: I0218 11:54:36.230947 4717 generic.go:334] "Generic (PLEG): container finished" podID="89193f23-0851-4c72-8fa7-bdefb5b47de9" containerID="e92dd2008d4296505bef9b2d76b4cb70757cffcc9c878b9941823c219adbab30" exitCode=0
Feb 18 11:54:36 crc kubenswrapper[4717]: I0218 11:54:36.231016 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzfwz" event={"ID":"89193f23-0851-4c72-8fa7-bdefb5b47de9","Type":"ContainerDied","Data":"e92dd2008d4296505bef9b2d76b4cb70757cffcc9c878b9941823c219adbab30"}
Feb 18 11:54:36 crc kubenswrapper[4717]: I0218 11:54:36.231044 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzfwz" event={"ID":"89193f23-0851-4c72-8fa7-bdefb5b47de9","Type":"ContainerStarted","Data":"ff00c4139d1d94ea204b01e547cc023aaef5289689b3de03eecf2259851e2ad9"}
Feb 18 11:54:36 crc kubenswrapper[4717]: I0218 11:54:36.232968 4717 generic.go:334] "Generic (PLEG): container finished" podID="9c91e056-1a1c-4444-bb0a-7557342ee962" containerID="ea96444c95a8525fbd1cb3f666de50047001e8d4e869f324da83305fef7c8bfc" exitCode=0
Feb 18 11:54:36 crc kubenswrapper[4717]: I0218 11:54:36.233230 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24h5b" event={"ID":"9c91e056-1a1c-4444-bb0a-7557342ee962","Type":"ContainerDied","Data":"ea96444c95a8525fbd1cb3f666de50047001e8d4e869f324da83305fef7c8bfc"}
Feb 18 11:54:36 crc kubenswrapper[4717]: I0218 11:54:36.245526 4717 generic.go:334] "Generic (PLEG): container finished" podID="253e019a-02ea-41f5-bf51-52340512ad50" containerID="9da38c5a783d3702a565cac1f1c3439cfc2e70fec37fcf16e3b1698e8c7a232d" exitCode=0
Feb 18 11:54:36 crc kubenswrapper[4717]: I0218 11:54:36.245762 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv6t6" event={"ID":"253e019a-02ea-41f5-bf51-52340512ad50","Type":"ContainerDied","Data":"9da38c5a783d3702a565cac1f1c3439cfc2e70fec37fcf16e3b1698e8c7a232d"}
Feb 18 11:54:36 crc kubenswrapper[4717]: I0218 11:54:36.245827 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv6t6" event={"ID":"253e019a-02ea-41f5-bf51-52340512ad50","Type":"ContainerStarted","Data":"ad00776c4ce191f60768fd1843628c53c3c9249c13194fa9755731f4ed934abd"}
Feb 18 11:54:36 crc kubenswrapper[4717]: I0218 11:54:36.248029 4717 generic.go:334] "Generic (PLEG): container finished" podID="b05a9e06-924f-407e-a7f8-01b14310f300" containerID="b8706e2f438368605bf6173c627ba4084c9f82cd70be690c0e215a7869c18003" exitCode=0
Feb 18 11:54:36 crc kubenswrapper[4717]: I0218 11:54:36.248056 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82l5m" event={"ID":"b05a9e06-924f-407e-a7f8-01b14310f300","Type":"ContainerDied","Data":"b8706e2f438368605bf6173c627ba4084c9f82cd70be690c0e215a7869c18003"}
Feb 18 11:54:37 crc kubenswrapper[4717]: I0218 11:54:37.256152 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24h5b" event={"ID":"9c91e056-1a1c-4444-bb0a-7557342ee962","Type":"ContainerStarted","Data":"3b6025601e3a13c047dc0ea069a3ac95d181102c04704ce672624fa3eaa53279"}
Feb 18 11:54:37 crc kubenswrapper[4717]: I0218 11:54:37.260164 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv6t6" event={"ID":"253e019a-02ea-41f5-bf51-52340512ad50","Type":"ContainerStarted","Data":"19093c9f866ac20d4278aa80eb07a96f87652872a92c026064e91f1fd71311dc"}
Feb 18 11:54:37 crc kubenswrapper[4717]: I0218 11:54:37.264374 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82l5m" event={"ID":"b05a9e06-924f-407e-a7f8-01b14310f300","Type":"ContainerStarted","Data":"de6b0c0eb702f62af838de2b8156969d0f562b1d937ced79fbf1cc4a43e46f53"}
Feb 18 11:54:37 crc kubenswrapper[4717]: I0218 11:54:37.266812 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzfwz" event={"ID":"89193f23-0851-4c72-8fa7-bdefb5b47de9","Type":"ContainerStarted","Data":"d46d1d776a2db9335ac6a250cf167828fc9adaab9a961a4befbd298c3a2b053c"}
Feb 18 11:54:37 crc kubenswrapper[4717]: I0218 11:54:37.281931 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-24h5b" podStartSLOduration=2.806301692 podStartE2EDuration="5.281914238s" podCreationTimestamp="2026-02-18 11:54:32 +0000 UTC" firstStartedPulling="2026-02-18 11:54:34.213407416 +0000 UTC m=+308.615508732" lastFinishedPulling="2026-02-18 11:54:36.689019972 +0000 UTC m=+311.091121278" observedRunningTime="2026-02-18 11:54:37.277496709 +0000 UTC m=+311.679598035" watchObservedRunningTime="2026-02-18 11:54:37.281914238 +0000 UTC m=+311.684015554"
Feb 18 11:54:37 crc kubenswrapper[4717]: I0218 11:54:37.344746 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-82l5m" podStartSLOduration=2.948119901 podStartE2EDuration="5.344722716s" podCreationTimestamp="2026-02-18 11:54:32 +0000 UTC" firstStartedPulling="2026-02-18 11:54:34.216025603 +0000 UTC m=+308.618126919" lastFinishedPulling="2026-02-18 11:54:36.612628418 +0000 UTC m=+311.014729734" observedRunningTime="2026-02-18 11:54:37.3414483 +0000 UTC m=+311.743549616" watchObservedRunningTime="2026-02-18 11:54:37.344722716 +0000 UTC m=+311.746824032"
Feb 18 11:54:38 crc kubenswrapper[4717]: I0218 11:54:38.277563 4717 generic.go:334] "Generic (PLEG): container finished" podID="253e019a-02ea-41f5-bf51-52340512ad50" containerID="19093c9f866ac20d4278aa80eb07a96f87652872a92c026064e91f1fd71311dc" exitCode=0
Feb 18 11:54:38 crc kubenswrapper[4717]: I0218 11:54:38.277670 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv6t6" event={"ID":"253e019a-02ea-41f5-bf51-52340512ad50","Type":"ContainerDied","Data":"19093c9f866ac20d4278aa80eb07a96f87652872a92c026064e91f1fd71311dc"}
Feb 18 11:54:38 crc kubenswrapper[4717]: I0218 11:54:38.283340 4717 generic.go:334] "Generic (PLEG): container finished" podID="89193f23-0851-4c72-8fa7-bdefb5b47de9" containerID="d46d1d776a2db9335ac6a250cf167828fc9adaab9a961a4befbd298c3a2b053c" exitCode=0
Feb 18 11:54:38 crc kubenswrapper[4717]: I0218 11:54:38.284011 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzfwz" event={"ID":"89193f23-0851-4c72-8fa7-bdefb5b47de9","Type":"ContainerDied","Data":"d46d1d776a2db9335ac6a250cf167828fc9adaab9a961a4befbd298c3a2b053c"}
Feb 18 11:54:38 crc kubenswrapper[4717]: I0218 11:54:38.909853 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 11:54:38 crc kubenswrapper[4717]: I0218 11:54:38.913474 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 11:54:39 crc kubenswrapper[4717]: I0218 11:54:39.292346 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rv6t6" event={"ID":"253e019a-02ea-41f5-bf51-52340512ad50","Type":"ContainerStarted","Data":"f2f1cdc6cec99ab8fed19611f4eefbf660622de0225bdc8fc2739e11bd675d9f"}
Feb 18 11:54:39 crc kubenswrapper[4717]: I0218 11:54:39.295471 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nzfwz" event={"ID":"89193f23-0851-4c72-8fa7-bdefb5b47de9","Type":"ContainerStarted","Data":"60bf01cf11a448b92562da109817cbd79ea3400d659c7d0919a99d5e962e3bb6"}
Feb 18 11:54:39 crc kubenswrapper[4717]: I0218 11:54:39.301141 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 11:54:39 crc kubenswrapper[4717]: I0218 11:54:39.325135 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rv6t6" podStartSLOduration=1.9256303460000002 podStartE2EDuration="4.325113134s" podCreationTimestamp="2026-02-18 11:54:35 +0000 UTC" firstStartedPulling="2026-02-18 11:54:36.246831467 +0000 UTC m=+310.648932783" lastFinishedPulling="2026-02-18 11:54:38.646314255 +0000 UTC m=+313.048415571" observedRunningTime="2026-02-18 11:54:39.319562622 +0000 UTC m=+313.721663938" watchObservedRunningTime="2026-02-18 11:54:39.325113134 +0000 UTC m=+313.727214470"
Feb 18 11:54:39 crc kubenswrapper[4717]: I0218 11:54:39.363297 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nzfwz" podStartSLOduration=1.88883459 podStartE2EDuration="4.363276161s" podCreationTimestamp="2026-02-18 11:54:35 +0000 UTC" firstStartedPulling="2026-02-18 11:54:36.23223975 +0000 UTC m=+310.634341056" lastFinishedPulling="2026-02-18 11:54:38.706681311 +0000 UTC m=+313.108782627" observedRunningTime="2026-02-18 11:54:39.343803711 +0000 UTC m=+313.745905027" watchObservedRunningTime="2026-02-18 11:54:39.363276161 +0000 UTC m=+313.765377477"
Feb 18 11:54:43 crc kubenswrapper[4717]: I0218 11:54:43.006571 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-82l5m"
Feb 18 11:54:43 crc kubenswrapper[4717]: I0218 11:54:43.006956 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-82l5m"
Feb 18 11:54:43 crc kubenswrapper[4717]: I0218 11:54:43.050109 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-82l5m"
Feb 18 11:54:43 crc kubenswrapper[4717]: I0218 11:54:43.204129 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-24h5b"
Feb 18 11:54:43 crc kubenswrapper[4717]: I0218 11:54:43.204188 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-24h5b"
Feb 18 11:54:43 crc kubenswrapper[4717]: I0218 11:54:43.242672 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-24h5b"
Feb 18 11:54:43 crc kubenswrapper[4717]: I0218 11:54:43.388191 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-82l5m"
Feb 18 11:54:43 crc kubenswrapper[4717]: I0218 11:54:43.393005 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-24h5b"
Feb 18 11:54:45 crc kubenswrapper[4717]: I0218 11:54:45.408649 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nzfwz"
Feb 18 11:54:45 crc kubenswrapper[4717]: I0218 11:54:45.409226 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nzfwz"
Feb 18 11:54:45 crc kubenswrapper[4717]: I0218 11:54:45.448458 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nzfwz"
Feb 18 11:54:45 crc kubenswrapper[4717]: I0218 11:54:45.616454 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rv6t6"
Feb 18 11:54:45 crc kubenswrapper[4717]: I0218 11:54:45.616570 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rv6t6"
Feb 18 11:54:45 crc kubenswrapper[4717]: I0218 11:54:45.652829 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rv6t6"
Feb 18 11:54:46 crc kubenswrapper[4717]: I0218 11:54:46.396373 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nzfwz"
Feb 18 11:54:46 crc kubenswrapper[4717]: I0218 11:54:46.396748 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rv6t6"
Feb 18 11:54:48 crc kubenswrapper[4717]: I0218 11:54:48.658412 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h4tq9"]
Feb 18 11:54:48 crc kubenswrapper[4717]: I0218 11:54:48.659362 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9"
Feb 18 11:54:48 crc kubenswrapper[4717]: W0218 11:54:48.661915 4717 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg": failed to list *v1.Secret: secrets "marketplace-operator-dockercfg-5nsgg" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object
Feb 18 11:54:48 crc kubenswrapper[4717]: E0218 11:54:48.661989 4717 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-5nsgg\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"marketplace-operator-dockercfg-5nsgg\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 18 11:54:48 crc kubenswrapper[4717]: W0218 11:54:48.664301 4717 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-metrics": failed to list *v1.Secret: secrets "marketplace-operator-metrics" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object
Feb 18 11:54:48 crc kubenswrapper[4717]: E0218 11:54:48.664356 4717 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"marketplace-operator-metrics\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Feb 18 11:54:48 crc kubenswrapper[4717]: I0218 11:54:48.672537 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h4tq9"]
Feb 18 11:54:48 crc kubenswrapper[4717]: I0218 11:54:48.681069 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 18 11:54:48 crc kubenswrapper[4717]: I0218 11:54:48.789618 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/27aacb4e-b587-400b-a73b-d7d27d3e2bb6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h4tq9\" (UID: \"27aacb4e-b587-400b-a73b-d7d27d3e2bb6\") " pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9"
Feb 18 11:54:48 crc kubenswrapper[4717]: I0218 11:54:48.789705 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gqff\" (UniqueName: \"kubernetes.io/projected/27aacb4e-b587-400b-a73b-d7d27d3e2bb6-kube-api-access-9gqff\") pod \"marketplace-operator-79b997595-h4tq9\" (UID: \"27aacb4e-b587-400b-a73b-d7d27d3e2bb6\") " pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9"
Feb 18 11:54:48 crc kubenswrapper[4717]: I0218 11:54:48.789761 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27aacb4e-b587-400b-a73b-d7d27d3e2bb6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-h4tq9\"
(UID: \"27aacb4e-b587-400b-a73b-d7d27d3e2bb6\") " pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9" Feb 18 11:54:48 crc kubenswrapper[4717]: I0218 11:54:48.890979 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/27aacb4e-b587-400b-a73b-d7d27d3e2bb6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h4tq9\" (UID: \"27aacb4e-b587-400b-a73b-d7d27d3e2bb6\") " pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9" Feb 18 11:54:48 crc kubenswrapper[4717]: I0218 11:54:48.891118 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gqff\" (UniqueName: \"kubernetes.io/projected/27aacb4e-b587-400b-a73b-d7d27d3e2bb6-kube-api-access-9gqff\") pod \"marketplace-operator-79b997595-h4tq9\" (UID: \"27aacb4e-b587-400b-a73b-d7d27d3e2bb6\") " pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9" Feb 18 11:54:48 crc kubenswrapper[4717]: I0218 11:54:48.891160 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27aacb4e-b587-400b-a73b-d7d27d3e2bb6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-h4tq9\" (UID: \"27aacb4e-b587-400b-a73b-d7d27d3e2bb6\") " pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9" Feb 18 11:54:48 crc kubenswrapper[4717]: I0218 11:54:48.892943 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27aacb4e-b587-400b-a73b-d7d27d3e2bb6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-h4tq9\" (UID: \"27aacb4e-b587-400b-a73b-d7d27d3e2bb6\") " pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9" Feb 18 11:54:48 crc kubenswrapper[4717]: I0218 11:54:48.916896 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9gqff\" (UniqueName: \"kubernetes.io/projected/27aacb4e-b587-400b-a73b-d7d27d3e2bb6-kube-api-access-9gqff\") pod \"marketplace-operator-79b997595-h4tq9\" (UID: \"27aacb4e-b587-400b-a73b-d7d27d3e2bb6\") " pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9" Feb 18 11:54:49 crc kubenswrapper[4717]: I0218 11:54:49.664789 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 11:54:49 crc kubenswrapper[4717]: I0218 11:54:49.680915 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/27aacb4e-b587-400b-a73b-d7d27d3e2bb6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h4tq9\" (UID: \"27aacb4e-b587-400b-a73b-d7d27d3e2bb6\") " pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9" Feb 18 11:54:50 crc kubenswrapper[4717]: I0218 11:54:50.132589 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 11:54:50 crc kubenswrapper[4717]: I0218 11:54:50.139109 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9" Feb 18 11:54:50 crc kubenswrapper[4717]: I0218 11:54:50.631502 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h4tq9"] Feb 18 11:54:51 crc kubenswrapper[4717]: I0218 11:54:51.393365 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9" event={"ID":"27aacb4e-b587-400b-a73b-d7d27d3e2bb6","Type":"ContainerStarted","Data":"5e57c1d7a0a80da83a61e36d0228958e5aa8d6cc7eaccafb2807656c8c3169c4"} Feb 18 11:54:51 crc kubenswrapper[4717]: I0218 11:54:51.395322 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9" Feb 18 11:54:51 crc kubenswrapper[4717]: I0218 11:54:51.395417 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9" event={"ID":"27aacb4e-b587-400b-a73b-d7d27d3e2bb6","Type":"ContainerStarted","Data":"bd83641924a32d18c6e7c941581cd433c3a88005a0ee3afa21f73db6b401d064"} Feb 18 11:54:51 crc kubenswrapper[4717]: I0218 11:54:51.396097 4717 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-h4tq9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused" start-of-body= Feb 18 11:54:51 crc kubenswrapper[4717]: I0218 11:54:51.396166 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9" podUID="27aacb4e-b587-400b-a73b-d7d27d3e2bb6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused" Feb 18 11:54:51 crc kubenswrapper[4717]: I0218 11:54:51.413879 4717 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9" podStartSLOduration=3.413857981 podStartE2EDuration="3.413857981s" podCreationTimestamp="2026-02-18 11:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:54:51.410007518 +0000 UTC m=+325.812108824" watchObservedRunningTime="2026-02-18 11:54:51.413857981 +0000 UTC m=+325.815959297" Feb 18 11:54:52 crc kubenswrapper[4717]: I0218 11:54:52.402108 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-h4tq9" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.469446 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pxcbg"] Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.472010 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.492037 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pxcbg"] Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.603151 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjtdl\" (UniqueName: \"kubernetes.io/projected/4809c94e-d725-43c1-bd1a-7c36f3058aab-kube-api-access-kjtdl\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.603209 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.603235 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4809c94e-d725-43c1-bd1a-7c36f3058aab-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.603274 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4809c94e-d725-43c1-bd1a-7c36f3058aab-registry-tls\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.603331 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4809c94e-d725-43c1-bd1a-7c36f3058aab-bound-sa-token\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.603412 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4809c94e-d725-43c1-bd1a-7c36f3058aab-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.603445 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4809c94e-d725-43c1-bd1a-7c36f3058aab-registry-certificates\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.603471 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4809c94e-d725-43c1-bd1a-7c36f3058aab-trusted-ca\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.639811 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.705049 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4809c94e-d725-43c1-bd1a-7c36f3058aab-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.705123 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4809c94e-d725-43c1-bd1a-7c36f3058aab-registry-certificates\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: 
\"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.705151 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4809c94e-d725-43c1-bd1a-7c36f3058aab-trusted-ca\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.705173 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjtdl\" (UniqueName: \"kubernetes.io/projected/4809c94e-d725-43c1-bd1a-7c36f3058aab-kube-api-access-kjtdl\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.705197 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4809c94e-d725-43c1-bd1a-7c36f3058aab-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.705219 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4809c94e-d725-43c1-bd1a-7c36f3058aab-registry-tls\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.705242 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/4809c94e-d725-43c1-bd1a-7c36f3058aab-bound-sa-token\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.706073 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4809c94e-d725-43c1-bd1a-7c36f3058aab-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.706600 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4809c94e-d725-43c1-bd1a-7c36f3058aab-registry-certificates\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.706929 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4809c94e-d725-43c1-bd1a-7c36f3058aab-trusted-ca\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.713062 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4809c94e-d725-43c1-bd1a-7c36f3058aab-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.713585 4717 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4809c94e-d725-43c1-bd1a-7c36f3058aab-registry-tls\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.726457 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjtdl\" (UniqueName: \"kubernetes.io/projected/4809c94e-d725-43c1-bd1a-7c36f3058aab-kube-api-access-kjtdl\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.726858 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4809c94e-d725-43c1-bd1a-7c36f3058aab-bound-sa-token\") pod \"image-registry-66df7c8f76-pxcbg\" (UID: \"4809c94e-d725-43c1-bd1a-7c36f3058aab\") " pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:33 crc kubenswrapper[4717]: I0218 11:55:33.790546 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:34 crc kubenswrapper[4717]: I0218 11:55:34.034311 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pxcbg"] Feb 18 11:55:34 crc kubenswrapper[4717]: I0218 11:55:34.869716 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" event={"ID":"4809c94e-d725-43c1-bd1a-7c36f3058aab","Type":"ContainerStarted","Data":"56e3889f85a00e0293aaa8e6a73d35871921ae43b7b2783771540b94310e138b"} Feb 18 11:55:34 crc kubenswrapper[4717]: I0218 11:55:34.870187 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" event={"ID":"4809c94e-d725-43c1-bd1a-7c36f3058aab","Type":"ContainerStarted","Data":"4b9e16a146e5e61c0ea407878eed09f8795988a2cb7bfbb1a2232b95e1fe5353"} Feb 18 11:55:34 crc kubenswrapper[4717]: I0218 11:55:34.870215 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:34 crc kubenswrapper[4717]: I0218 11:55:34.893541 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" podStartSLOduration=1.893513235 podStartE2EDuration="1.893513235s" podCreationTimestamp="2026-02-18 11:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:55:34.893434833 +0000 UTC m=+369.295536149" watchObservedRunningTime="2026-02-18 11:55:34.893513235 +0000 UTC m=+369.295614561" Feb 18 11:55:42 crc kubenswrapper[4717]: I0218 11:55:42.773387 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:55:42 crc kubenswrapper[4717]: I0218 11:55:42.774114 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:55:53 crc kubenswrapper[4717]: I0218 11:55:53.798992 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-pxcbg" Feb 18 11:55:53 crc kubenswrapper[4717]: I0218 11:55:53.853982 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bxl4l"] Feb 18 11:56:12 crc kubenswrapper[4717]: I0218 11:56:12.773016 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:56:12 crc kubenswrapper[4717]: I0218 11:56:12.773805 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:56:18 crc kubenswrapper[4717]: I0218 11:56:18.894035 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" podUID="d9b3201b-b5e0-4a95-9015-97309eb9957e" containerName="registry" containerID="cri-o://fd6149742754938f5c7b76833a7bb33360201d387c6fa660ca140a325908dab1" gracePeriod=30 
Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.134810 4717 generic.go:334] "Generic (PLEG): container finished" podID="d9b3201b-b5e0-4a95-9015-97309eb9957e" containerID="fd6149742754938f5c7b76833a7bb33360201d387c6fa660ca140a325908dab1" exitCode=0 Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.134863 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" event={"ID":"d9b3201b-b5e0-4a95-9015-97309eb9957e","Type":"ContainerDied","Data":"fd6149742754938f5c7b76833a7bb33360201d387c6fa660ca140a325908dab1"} Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.255577 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.320702 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9b3201b-b5e0-4a95-9015-97309eb9957e-registry-certificates\") pod \"d9b3201b-b5e0-4a95-9015-97309eb9957e\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.320800 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9b3201b-b5e0-4a95-9015-97309eb9957e-ca-trust-extracted\") pod \"d9b3201b-b5e0-4a95-9015-97309eb9957e\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.320905 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-registry-tls\") pod \"d9b3201b-b5e0-4a95-9015-97309eb9957e\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.320951 4717 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x5n5g\" (UniqueName: \"kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-kube-api-access-x5n5g\") pod \"d9b3201b-b5e0-4a95-9015-97309eb9957e\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.321003 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9b3201b-b5e0-4a95-9015-97309eb9957e-installation-pull-secrets\") pod \"d9b3201b-b5e0-4a95-9015-97309eb9957e\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.321249 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d9b3201b-b5e0-4a95-9015-97309eb9957e\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.321347 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-bound-sa-token\") pod \"d9b3201b-b5e0-4a95-9015-97309eb9957e\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.321398 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9b3201b-b5e0-4a95-9015-97309eb9957e-trusted-ca\") pod \"d9b3201b-b5e0-4a95-9015-97309eb9957e\" (UID: \"d9b3201b-b5e0-4a95-9015-97309eb9957e\") " Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.322379 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9b3201b-b5e0-4a95-9015-97309eb9957e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod 
"d9b3201b-b5e0-4a95-9015-97309eb9957e" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.322477 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9b3201b-b5e0-4a95-9015-97309eb9957e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d9b3201b-b5e0-4a95-9015-97309eb9957e" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.327903 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d9b3201b-b5e0-4a95-9015-97309eb9957e" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.328436 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-kube-api-access-x5n5g" (OuterVolumeSpecName: "kube-api-access-x5n5g") pod "d9b3201b-b5e0-4a95-9015-97309eb9957e" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e"). InnerVolumeSpecName "kube-api-access-x5n5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.329165 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d9b3201b-b5e0-4a95-9015-97309eb9957e" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.331670 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b3201b-b5e0-4a95-9015-97309eb9957e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d9b3201b-b5e0-4a95-9015-97309eb9957e" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.341042 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9b3201b-b5e0-4a95-9015-97309eb9957e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d9b3201b-b5e0-4a95-9015-97309eb9957e" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.342125 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d9b3201b-b5e0-4a95-9015-97309eb9957e" (UID: "d9b3201b-b5e0-4a95-9015-97309eb9957e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.423716 4717 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.424251 4717 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9b3201b-b5e0-4a95-9015-97309eb9957e-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.424354 4717 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9b3201b-b5e0-4a95-9015-97309eb9957e-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.424434 4717 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9b3201b-b5e0-4a95-9015-97309eb9957e-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.424511 4717 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.424583 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5n5g\" (UniqueName: \"kubernetes.io/projected/d9b3201b-b5e0-4a95-9015-97309eb9957e-kube-api-access-x5n5g\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:19 crc kubenswrapper[4717]: I0218 11:56:19.424652 4717 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9b3201b-b5e0-4a95-9015-97309eb9957e-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 11:56:20 crc 
kubenswrapper[4717]: I0218 11:56:20.500211 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" event={"ID":"d9b3201b-b5e0-4a95-9015-97309eb9957e","Type":"ContainerDied","Data":"1027d3c50c33bb3d95a7da497331e210fecdd677d5c24adb93e25b4dc411d484"} Feb 18 11:56:20 crc kubenswrapper[4717]: I0218 11:56:20.500295 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bxl4l" Feb 18 11:56:20 crc kubenswrapper[4717]: I0218 11:56:20.500340 4717 scope.go:117] "RemoveContainer" containerID="fd6149742754938f5c7b76833a7bb33360201d387c6fa660ca140a325908dab1" Feb 18 11:56:20 crc kubenswrapper[4717]: I0218 11:56:20.535758 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bxl4l"] Feb 18 11:56:20 crc kubenswrapper[4717]: I0218 11:56:20.539087 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bxl4l"] Feb 18 11:56:21 crc kubenswrapper[4717]: I0218 11:56:21.044675 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9b3201b-b5e0-4a95-9015-97309eb9957e" path="/var/lib/kubelet/pods/d9b3201b-b5e0-4a95-9015-97309eb9957e/volumes" Feb 18 11:56:42 crc kubenswrapper[4717]: I0218 11:56:42.773506 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:56:42 crc kubenswrapper[4717]: I0218 11:56:42.774087 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:56:42 crc kubenswrapper[4717]: I0218 11:56:42.774134 4717 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 11:56:42 crc kubenswrapper[4717]: I0218 11:56:42.774759 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94e1dd922b2fb688b0271b6031b400dcd65ef629725f95285a8e4f6c5fddd404"} pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 11:56:42 crc kubenswrapper[4717]: I0218 11:56:42.774809 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" containerID="cri-o://94e1dd922b2fb688b0271b6031b400dcd65ef629725f95285a8e4f6c5fddd404" gracePeriod=600 Feb 18 11:56:43 crc kubenswrapper[4717]: I0218 11:56:43.643118 4717 generic.go:334] "Generic (PLEG): container finished" podID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerID="94e1dd922b2fb688b0271b6031b400dcd65ef629725f95285a8e4f6c5fddd404" exitCode=0 Feb 18 11:56:43 crc kubenswrapper[4717]: I0218 11:56:43.643212 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerDied","Data":"94e1dd922b2fb688b0271b6031b400dcd65ef629725f95285a8e4f6c5fddd404"} Feb 18 11:56:43 crc kubenswrapper[4717]: I0218 11:56:43.643588 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" 
event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"dec67bbab9750d74b40d9a5adb8456536898db0fb4f915a1c596f22068aed83e"} Feb 18 11:56:43 crc kubenswrapper[4717]: I0218 11:56:43.643619 4717 scope.go:117] "RemoveContainer" containerID="440fe7df903dc2039187619eef3685145f9d6d2d571981a0f8e3ff6cc5a9f727" Feb 18 11:59:12 crc kubenswrapper[4717]: I0218 11:59:12.773706 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:59:12 crc kubenswrapper[4717]: I0218 11:59:12.774551 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.352412 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-qx54w"] Feb 18 11:59:27 crc kubenswrapper[4717]: E0218 11:59:27.353643 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9b3201b-b5e0-4a95-9015-97309eb9957e" containerName="registry" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.353664 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9b3201b-b5e0-4a95-9015-97309eb9957e" containerName="registry" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.353823 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9b3201b-b5e0-4a95-9015-97309eb9957e" containerName="registry" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.354486 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qx54w" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.359953 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.359960 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.363201 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-mr9jc"] Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.364318 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mr9jc" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.367433 4717 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-q5x6x" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.368253 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8dvlf"] Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.368638 4717 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-f9qrx" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.369167 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-8dvlf" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.376427 4717 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-v5nxm" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.378458 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-qx54w"] Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.383420 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8dvlf"] Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.387455 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chzkl\" (UniqueName: \"kubernetes.io/projected/efe0486e-8153-4083-aedf-15085839219b-kube-api-access-chzkl\") pod \"cert-manager-webhook-687f57d79b-8dvlf\" (UID: \"efe0486e-8153-4083-aedf-15085839219b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8dvlf" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.387596 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfgkm\" (UniqueName: \"kubernetes.io/projected/d10e332f-4255-4315-bf68-1b479919ed9c-kube-api-access-hfgkm\") pod \"cert-manager-858654f9db-mr9jc\" (UID: \"d10e332f-4255-4315-bf68-1b479919ed9c\") " pod="cert-manager/cert-manager-858654f9db-mr9jc" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.387663 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vwhr\" (UniqueName: \"kubernetes.io/projected/70bc2303-bab2-48bc-a4a3-4c19b86571aa-kube-api-access-5vwhr\") pod \"cert-manager-cainjector-cf98fcc89-qx54w\" (UID: \"70bc2303-bab2-48bc-a4a3-4c19b86571aa\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-qx54w" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.442117 
4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mr9jc"] Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.489183 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfgkm\" (UniqueName: \"kubernetes.io/projected/d10e332f-4255-4315-bf68-1b479919ed9c-kube-api-access-hfgkm\") pod \"cert-manager-858654f9db-mr9jc\" (UID: \"d10e332f-4255-4315-bf68-1b479919ed9c\") " pod="cert-manager/cert-manager-858654f9db-mr9jc" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.489236 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vwhr\" (UniqueName: \"kubernetes.io/projected/70bc2303-bab2-48bc-a4a3-4c19b86571aa-kube-api-access-5vwhr\") pod \"cert-manager-cainjector-cf98fcc89-qx54w\" (UID: \"70bc2303-bab2-48bc-a4a3-4c19b86571aa\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-qx54w" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.489281 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chzkl\" (UniqueName: \"kubernetes.io/projected/efe0486e-8153-4083-aedf-15085839219b-kube-api-access-chzkl\") pod \"cert-manager-webhook-687f57d79b-8dvlf\" (UID: \"efe0486e-8153-4083-aedf-15085839219b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8dvlf" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.510102 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chzkl\" (UniqueName: \"kubernetes.io/projected/efe0486e-8153-4083-aedf-15085839219b-kube-api-access-chzkl\") pod \"cert-manager-webhook-687f57d79b-8dvlf\" (UID: \"efe0486e-8153-4083-aedf-15085839219b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8dvlf" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.510767 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vwhr\" (UniqueName: 
\"kubernetes.io/projected/70bc2303-bab2-48bc-a4a3-4c19b86571aa-kube-api-access-5vwhr\") pod \"cert-manager-cainjector-cf98fcc89-qx54w\" (UID: \"70bc2303-bab2-48bc-a4a3-4c19b86571aa\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-qx54w" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.517081 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfgkm\" (UniqueName: \"kubernetes.io/projected/d10e332f-4255-4315-bf68-1b479919ed9c-kube-api-access-hfgkm\") pod \"cert-manager-858654f9db-mr9jc\" (UID: \"d10e332f-4255-4315-bf68-1b479919ed9c\") " pod="cert-manager/cert-manager-858654f9db-mr9jc" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.708013 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qx54w" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.722625 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mr9jc" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.746398 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-8dvlf" Feb 18 11:59:27 crc kubenswrapper[4717]: I0218 11:59:27.990738 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-qx54w"] Feb 18 11:59:28 crc kubenswrapper[4717]: I0218 11:59:28.001568 4717 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 11:59:28 crc kubenswrapper[4717]: I0218 11:59:28.237170 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mr9jc"] Feb 18 11:59:28 crc kubenswrapper[4717]: W0218 11:59:28.239626 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd10e332f_4255_4315_bf68_1b479919ed9c.slice/crio-cbb54308f1ae0cb69516c0593c42f5becbe9de0a242f4c8163dc8afe8f05dd51 WatchSource:0}: Error finding container cbb54308f1ae0cb69516c0593c42f5becbe9de0a242f4c8163dc8afe8f05dd51: Status 404 returned error can't find the container with id cbb54308f1ae0cb69516c0593c42f5becbe9de0a242f4c8163dc8afe8f05dd51 Feb 18 11:59:28 crc kubenswrapper[4717]: W0218 11:59:28.252466 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefe0486e_8153_4083_aedf_15085839219b.slice/crio-caf10a2e6784ab1b50ad4f418c2b34f4ced147a3fa7e410304f058cf1f6eed5d WatchSource:0}: Error finding container caf10a2e6784ab1b50ad4f418c2b34f4ced147a3fa7e410304f058cf1f6eed5d: Status 404 returned error can't find the container with id caf10a2e6784ab1b50ad4f418c2b34f4ced147a3fa7e410304f058cf1f6eed5d Feb 18 11:59:28 crc kubenswrapper[4717]: I0218 11:59:28.252888 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8dvlf"] Feb 18 11:59:28 crc kubenswrapper[4717]: I0218 11:59:28.540184 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-webhook-687f57d79b-8dvlf" event={"ID":"efe0486e-8153-4083-aedf-15085839219b","Type":"ContainerStarted","Data":"caf10a2e6784ab1b50ad4f418c2b34f4ced147a3fa7e410304f058cf1f6eed5d"} Feb 18 11:59:28 crc kubenswrapper[4717]: I0218 11:59:28.542246 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qx54w" event={"ID":"70bc2303-bab2-48bc-a4a3-4c19b86571aa","Type":"ContainerStarted","Data":"8ddff3524fbc8586815a49667d8faf36acf33778a5e669a1fb52eafc1071fc03"} Feb 18 11:59:28 crc kubenswrapper[4717]: I0218 11:59:28.543301 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mr9jc" event={"ID":"d10e332f-4255-4315-bf68-1b479919ed9c","Type":"ContainerStarted","Data":"cbb54308f1ae0cb69516c0593c42f5becbe9de0a242f4c8163dc8afe8f05dd51"} Feb 18 11:59:34 crc kubenswrapper[4717]: I0218 11:59:34.577533 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qx54w" event={"ID":"70bc2303-bab2-48bc-a4a3-4c19b86571aa","Type":"ContainerStarted","Data":"fc0c0b239dc156ef09ac1ebfb01228a78817795ff87cabf39276a9a119b80d2f"} Feb 18 11:59:34 crc kubenswrapper[4717]: I0218 11:59:34.601245 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qx54w" podStartSLOduration=1.6767970700000001 podStartE2EDuration="7.601214355s" podCreationTimestamp="2026-02-18 11:59:27 +0000 UTC" firstStartedPulling="2026-02-18 11:59:28.001368091 +0000 UTC m=+602.403469407" lastFinishedPulling="2026-02-18 11:59:33.925785376 +0000 UTC m=+608.327886692" observedRunningTime="2026-02-18 11:59:34.593935342 +0000 UTC m=+608.996036678" watchObservedRunningTime="2026-02-18 11:59:34.601214355 +0000 UTC m=+609.003315691" Feb 18 11:59:35 crc kubenswrapper[4717]: I0218 11:59:35.585796 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mr9jc" 
event={"ID":"d10e332f-4255-4315-bf68-1b479919ed9c","Type":"ContainerStarted","Data":"c3bf8cad1ab8fbbe63f0b9d5705c35c04d4a8623b10f9339de4705cdfe0c8af9"} Feb 18 11:59:35 crc kubenswrapper[4717]: I0218 11:59:35.605493 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-mr9jc" podStartSLOduration=1.687776443 podStartE2EDuration="8.605454469s" podCreationTimestamp="2026-02-18 11:59:27 +0000 UTC" firstStartedPulling="2026-02-18 11:59:28.245402538 +0000 UTC m=+602.647503854" lastFinishedPulling="2026-02-18 11:59:35.163080564 +0000 UTC m=+609.565181880" observedRunningTime="2026-02-18 11:59:35.60176169 +0000 UTC m=+610.003863026" watchObservedRunningTime="2026-02-18 11:59:35.605454469 +0000 UTC m=+610.007555785" Feb 18 11:59:36 crc kubenswrapper[4717]: I0218 11:59:36.593763 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-8dvlf" event={"ID":"efe0486e-8153-4083-aedf-15085839219b","Type":"ContainerStarted","Data":"cebe96336727ba2b23f64ff65dcd10a9fbcc44e0f7a10c5abe3dc0f72954ee81"} Feb 18 11:59:36 crc kubenswrapper[4717]: I0218 11:59:36.613099 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-8dvlf" podStartSLOduration=2.316986176 podStartE2EDuration="9.613079691s" podCreationTimestamp="2026-02-18 11:59:27 +0000 UTC" firstStartedPulling="2026-02-18 11:59:28.254062212 +0000 UTC m=+602.656163528" lastFinishedPulling="2026-02-18 11:59:35.550155727 +0000 UTC m=+609.952257043" observedRunningTime="2026-02-18 11:59:36.610087244 +0000 UTC m=+611.012188580" watchObservedRunningTime="2026-02-18 11:59:36.613079691 +0000 UTC m=+611.015180997" Feb 18 11:59:36 crc kubenswrapper[4717]: I0218 11:59:36.982167 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2fh5s"] Feb 18 11:59:36 crc kubenswrapper[4717]: I0218 11:59:36.983310 4717 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovn-controller" containerID="cri-o://457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df" gracePeriod=30 Feb 18 11:59:36 crc kubenswrapper[4717]: I0218 11:59:36.983372 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="nbdb" containerID="cri-o://81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac" gracePeriod=30 Feb 18 11:59:36 crc kubenswrapper[4717]: I0218 11:59:36.983463 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="kube-rbac-proxy-node" containerID="cri-o://431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c" gracePeriod=30 Feb 18 11:59:36 crc kubenswrapper[4717]: I0218 11:59:36.983464 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="sbdb" containerID="cri-o://2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa" gracePeriod=30 Feb 18 11:59:36 crc kubenswrapper[4717]: I0218 11:59:36.983487 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="northd" containerID="cri-o://816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102" gracePeriod=30 Feb 18 11:59:36 crc kubenswrapper[4717]: I0218 11:59:36.983471 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" 
containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40" gracePeriod=30 Feb 18 11:59:36 crc kubenswrapper[4717]: I0218 11:59:36.983633 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovn-acl-logging" containerID="cri-o://69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42" gracePeriod=30 Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.018102 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovnkube-controller" containerID="cri-o://7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade" gracePeriod=30 Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.320369 4717 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa is running failed: container process not found" containerID="2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.320475 4717 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac is running failed: container process not found" containerID="81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.320761 4717 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac is running failed: container process not found" containerID="81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.320857 4717 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa is running failed: container process not found" containerID="2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.321002 4717 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac is running failed: container process not found" containerID="81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.321051 4717 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="nbdb" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.321100 4717 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa is running failed: container process not found" containerID="2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.321128 4717 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="sbdb" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.394003 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovnkube-controller/3.log" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.395770 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovn-acl-logging/0.log" Feb 18 11:59:37 crc kubenswrapper[4717]: 
I0218 11:59:37.396155 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovn-controller/0.log" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.396552 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.454422 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kf57z"] Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.454776 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovnkube-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.454795 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovnkube-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.454804 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="sbdb" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.454811 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="sbdb" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.454824 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="kubecfg-setup" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.454831 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="kubecfg-setup" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.454838 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="nbdb" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.454844 4717 
state_mem.go:107] "Deleted CPUSet assignment" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="nbdb" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.454855 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.454881 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.454910 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovnkube-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.455575 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovnkube-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.455605 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovn-acl-logging" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.455633 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovn-acl-logging" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.455644 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovn-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.455650 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovn-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.455659 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="kube-rbac-proxy-node" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 
11:59:37.455665 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="kube-rbac-proxy-node" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.455673 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovnkube-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.455680 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovnkube-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.455688 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="northd" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.455694 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="northd" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.455782 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovnkube-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.455791 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="northd" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.455799 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovnkube-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.455806 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovn-acl-logging" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.455818 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="kube-rbac-proxy-node" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 
11:59:37.455825 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="nbdb" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.455833 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovnkube-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.455839 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovn-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.455846 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.455853 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="sbdb" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.455939 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovnkube-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.455947 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovnkube-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.456031 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovnkube-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.456112 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovnkube-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.456119 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovnkube-controller" Feb 18 11:59:37 crc 
kubenswrapper[4717]: I0218 11:59:37.456222 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerName="ovnkube-controller" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.457700 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561561 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-systemd\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561608 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-node-log\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561629 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-cni-bin\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561648 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-ovn\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561683 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-cni-netd\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561697 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-openvswitch\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561756 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-systemd-units\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561826 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561714 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-node-log" (OuterVolumeSpecName: "node-log") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561861 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-env-overrides\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561730 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561739 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561889 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-log-socket\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561922 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovnkube-config\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561759 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561940 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-kubelet\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561959 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-var-lib-openvswitch\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561996 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovnkube-script-lib\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562023 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-slash\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562036 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-etc-openvswitch\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562051 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-run-ovn-kubernetes\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562069 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-run-netns\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562093 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk2ht\" (UniqueName: \"kubernetes.io/projected/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-kube-api-access-xk2ht\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562124 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovn-node-metrics-cert\") pod \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\" (UID: \"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6\") " Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562251 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-systemd-units\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562302 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-run-ovn\") pod 
\"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561941 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-log-socket" (OuterVolumeSpecName: "log-socket") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561791 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561816 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561912 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.561975 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562293 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562319 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562370 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562387 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562396 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562413 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562660 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562690 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-slash" (OuterVolumeSpecName: "host-slash") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562324 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-run-ovn-kubernetes\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562733 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-log-socket\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562759 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-node-log\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562788 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562913 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frgzs\" (UniqueName: \"kubernetes.io/projected/d27a097b-318a-421e-93e9-31ccbd231535-kube-api-access-frgzs\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562951 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d27a097b-318a-421e-93e9-31ccbd231535-ovnkube-config\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.562997 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-run-netns\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563018 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-var-lib-openvswitch\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563044 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-run-openvswitch\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563065 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-cni-bin\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563111 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d27a097b-318a-421e-93e9-31ccbd231535-ovn-node-metrics-cert\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563131 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-slash\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563164 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d27a097b-318a-421e-93e9-31ccbd231535-env-overrides\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563198 4717 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-etc-openvswitch\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563214 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d27a097b-318a-421e-93e9-31ccbd231535-ovnkube-script-lib\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563230 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-run-systemd\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563248 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-cni-netd\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563293 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-kubelet\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 
11:59:37.563350 4717 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563362 4717 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563373 4717 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563382 4717 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563392 4717 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563403 4717 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563414 4717 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563426 4717 reconciler_common.go:293] "Volume detached for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-log-socket\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563436 4717 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563445 4717 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563455 4717 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563464 4717 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563474 4717 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-slash\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563484 4717 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563494 4717 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-run-ovn-kubernetes\") on node 
\"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563505 4717 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.563515 4717 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-node-log\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.567564 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.567696 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-kube-api-access-xk2ht" (OuterVolumeSpecName: "kube-api-access-xk2ht") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "kube-api-access-xk2ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.575204 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" (UID: "26c6bcf7-2c2a-41bf-b76c-4f040f5693f6"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.600429 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hvktx_41f72a5f-4820-4dc2-a6c5-243550881aaf/kube-multus/2.log" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.601105 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hvktx_41f72a5f-4820-4dc2-a6c5-243550881aaf/kube-multus/1.log" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.601147 4717 generic.go:334] "Generic (PLEG): container finished" podID="41f72a5f-4820-4dc2-a6c5-243550881aaf" containerID="99bf552605d0eed937374d50a80b61ae031663c7b68d2cb0435c94aa2c2469cd" exitCode=2 Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.601210 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hvktx" event={"ID":"41f72a5f-4820-4dc2-a6c5-243550881aaf","Type":"ContainerDied","Data":"99bf552605d0eed937374d50a80b61ae031663c7b68d2cb0435c94aa2c2469cd"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.601243 4717 scope.go:117] "RemoveContainer" containerID="bdc0e2a60a848f63de21d7e284f91b49cd6d637ce86ec545ed5cd2a8fb411579" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.601774 4717 scope.go:117] "RemoveContainer" containerID="99bf552605d0eed937374d50a80b61ae031663c7b68d2cb0435c94aa2c2469cd" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.602109 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-hvktx_openshift-multus(41f72a5f-4820-4dc2-a6c5-243550881aaf)\"" pod="openshift-multus/multus-hvktx" podUID="41f72a5f-4820-4dc2-a6c5-243550881aaf" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.603392 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovnkube-controller/3.log" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.605939 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovn-acl-logging/0.log" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.606439 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2fh5s_26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/ovn-controller/0.log" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.607660 4717 generic.go:334] "Generic (PLEG): container finished" podID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerID="7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade" exitCode=0 Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.607686 4717 generic.go:334] "Generic (PLEG): container finished" podID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerID="2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa" exitCode=0 Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.607699 4717 generic.go:334] "Generic (PLEG): container finished" podID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerID="81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac" exitCode=0 Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.607709 4717 generic.go:334] "Generic (PLEG): container finished" podID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerID="816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102" exitCode=0 Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.607719 4717 generic.go:334] "Generic (PLEG): container finished" podID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerID="b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40" exitCode=0 Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.607727 4717 generic.go:334] "Generic (PLEG): container finished" 
podID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerID="431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c" exitCode=0 Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.607735 4717 generic.go:334] "Generic (PLEG): container finished" podID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerID="69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42" exitCode=143 Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.607744 4717 generic.go:334] "Generic (PLEG): container finished" podID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" containerID="457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df" exitCode=143 Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.607860 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.607858 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerDied","Data":"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.607969 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerDied","Data":"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.607987 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerDied","Data":"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608001 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" 
event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerDied","Data":"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608015 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerDied","Data":"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608029 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerDied","Data":"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608042 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608055 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608062 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608069 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608076 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608083 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608089 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608096 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608103 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608110 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608119 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerDied","Data":"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608130 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608137 4717 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608144 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608150 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608157 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608163 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608170 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608176 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608183 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608189 4717 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608199 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerDied","Data":"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608208 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608216 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608225 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608231 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608237 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608244 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40"} Feb 18 
11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608250 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608256 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608281 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608288 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608297 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2fh5s" event={"ID":"26c6bcf7-2c2a-41bf-b76c-4f040f5693f6","Type":"ContainerDied","Data":"e0a81d52d8278381e5d0a4615c75f67ba9dd8e9572b004ab955e1ff5efcb91e3"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608308 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608316 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608322 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608329 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608335 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608341 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608347 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608353 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608359 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608366 4717 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519"} Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.608434 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="cert-manager/cert-manager-webhook-687f57d79b-8dvlf" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.647031 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2fh5s"] Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.650809 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2fh5s"] Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.662385 4717 scope.go:117] "RemoveContainer" containerID="7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664346 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d27a097b-318a-421e-93e9-31ccbd231535-ovn-node-metrics-cert\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664425 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-slash\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664465 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d27a097b-318a-421e-93e9-31ccbd231535-env-overrides\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664494 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-etc-openvswitch\") pod 
\"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664513 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d27a097b-318a-421e-93e9-31ccbd231535-ovnkube-script-lib\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664530 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-run-systemd\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664555 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-cni-netd\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664580 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-kubelet\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664605 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-systemd-units\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664628 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-run-ovn\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664625 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-etc-openvswitch\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664649 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-run-ovn-kubernetes\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664672 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-log-socket\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664681 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-cni-netd\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 
11:59:37.664699 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-run-ovn\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664625 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-slash\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664697 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-node-log\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664865 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664981 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frgzs\" (UniqueName: \"kubernetes.io/projected/d27a097b-318a-421e-93e9-31ccbd231535-kube-api-access-frgzs\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.665108 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d27a097b-318a-421e-93e9-31ccbd231535-ovnkube-config\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.665132 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664783 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-run-ovn-kubernetes\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664786 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-log-socket\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.665185 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-run-netns\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664755 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-node-log\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664691 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-kubelet\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.665277 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-var-lib-openvswitch\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.665218 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-var-lib-openvswitch\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664743 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-systemd-units\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.664649 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-run-systemd\") 
pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.665291 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-run-netns\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.665368 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-run-openvswitch\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.665461 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-run-openvswitch\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.665512 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-cni-bin\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.665610 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk2ht\" (UniqueName: \"kubernetes.io/projected/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-kube-api-access-xk2ht\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.665615 4717 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d27a097b-318a-421e-93e9-31ccbd231535-host-cni-bin\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.665625 4717 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.665667 4717 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.665697 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d27a097b-318a-421e-93e9-31ccbd231535-ovnkube-script-lib\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.665863 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d27a097b-318a-421e-93e9-31ccbd231535-env-overrides\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.666169 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d27a097b-318a-421e-93e9-31ccbd231535-ovnkube-config\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc 
kubenswrapper[4717]: I0218 11:59:37.669105 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d27a097b-318a-421e-93e9-31ccbd231535-ovn-node-metrics-cert\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.683370 4717 scope.go:117] "RemoveContainer" containerID="09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.685906 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frgzs\" (UniqueName: \"kubernetes.io/projected/d27a097b-318a-421e-93e9-31ccbd231535-kube-api-access-frgzs\") pod \"ovnkube-node-kf57z\" (UID: \"d27a097b-318a-421e-93e9-31ccbd231535\") " pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.699596 4717 scope.go:117] "RemoveContainer" containerID="2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.713282 4717 scope.go:117] "RemoveContainer" containerID="81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.727608 4717 scope.go:117] "RemoveContainer" containerID="816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.745604 4717 scope.go:117] "RemoveContainer" containerID="b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.758188 4717 scope.go:117] "RemoveContainer" containerID="431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.771905 4717 scope.go:117] "RemoveContainer" 
containerID="69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.771953 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.789076 4717 scope.go:117] "RemoveContainer" containerID="457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df" Feb 18 11:59:37 crc kubenswrapper[4717]: W0218 11:59:37.797538 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd27a097b_318a_421e_93e9_31ccbd231535.slice/crio-5d77ffceb4ba4879ca3ce3a503f2ff5e2bc1a121e96e74fbffc44fbd57322caa WatchSource:0}: Error finding container 5d77ffceb4ba4879ca3ce3a503f2ff5e2bc1a121e96e74fbffc44fbd57322caa: Status 404 returned error can't find the container with id 5d77ffceb4ba4879ca3ce3a503f2ff5e2bc1a121e96e74fbffc44fbd57322caa Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.802666 4717 scope.go:117] "RemoveContainer" containerID="c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.823772 4717 scope.go:117] "RemoveContainer" containerID="7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.824288 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade\": container with ID starting with 7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade not found: ID does not exist" containerID="7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.824330 4717 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade"} err="failed to get container status \"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade\": rpc error: code = NotFound desc = could not find container \"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade\": container with ID starting with 7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.824360 4717 scope.go:117] "RemoveContainer" containerID="09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.824744 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\": container with ID starting with 09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24 not found: ID does not exist" containerID="09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.824772 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24"} err="failed to get container status \"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\": rpc error: code = NotFound desc = could not find container \"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\": container with ID starting with 09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.824796 4717 scope.go:117] "RemoveContainer" containerID="2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.825120 4717 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\": container with ID starting with 2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa not found: ID does not exist" containerID="2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.825147 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa"} err="failed to get container status \"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\": rpc error: code = NotFound desc = could not find container \"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\": container with ID starting with 2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.825163 4717 scope.go:117] "RemoveContainer" containerID="81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.825397 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\": container with ID starting with 81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac not found: ID does not exist" containerID="81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.825423 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac"} err="failed to get container status \"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\": rpc error: code = NotFound desc = could not find container 
\"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\": container with ID starting with 81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.825438 4717 scope.go:117] "RemoveContainer" containerID="816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.825692 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\": container with ID starting with 816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102 not found: ID does not exist" containerID="816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.825711 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102"} err="failed to get container status \"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\": rpc error: code = NotFound desc = could not find container \"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\": container with ID starting with 816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.825725 4717 scope.go:117] "RemoveContainer" containerID="b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.826061 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\": container with ID starting with b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40 not found: ID does not exist" 
containerID="b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.826083 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40"} err="failed to get container status \"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\": rpc error: code = NotFound desc = could not find container \"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\": container with ID starting with b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.826098 4717 scope.go:117] "RemoveContainer" containerID="431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.826517 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\": container with ID starting with 431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c not found: ID does not exist" containerID="431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.826571 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c"} err="failed to get container status \"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\": rpc error: code = NotFound desc = could not find container \"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\": container with ID starting with 431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.826614 4717 scope.go:117] 
"RemoveContainer" containerID="69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.826926 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\": container with ID starting with 69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42 not found: ID does not exist" containerID="69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.826958 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42"} err="failed to get container status \"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\": rpc error: code = NotFound desc = could not find container \"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\": container with ID starting with 69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.826975 4717 scope.go:117] "RemoveContainer" containerID="457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.827249 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\": container with ID starting with 457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df not found: ID does not exist" containerID="457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.827287 4717 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df"} err="failed to get container status \"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\": rpc error: code = NotFound desc = could not find container \"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\": container with ID starting with 457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.827302 4717 scope.go:117] "RemoveContainer" containerID="c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519" Feb 18 11:59:37 crc kubenswrapper[4717]: E0218 11:59:37.827587 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\": container with ID starting with c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519 not found: ID does not exist" containerID="c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.827615 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519"} err="failed to get container status \"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\": rpc error: code = NotFound desc = could not find container \"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\": container with ID starting with c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.827636 4717 scope.go:117] "RemoveContainer" containerID="7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.827923 4717 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade"} err="failed to get container status \"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade\": rpc error: code = NotFound desc = could not find container \"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade\": container with ID starting with 7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.827946 4717 scope.go:117] "RemoveContainer" containerID="09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.828295 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24"} err="failed to get container status \"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\": rpc error: code = NotFound desc = could not find container \"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\": container with ID starting with 09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.828351 4717 scope.go:117] "RemoveContainer" containerID="2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.828631 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa"} err="failed to get container status \"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\": rpc error: code = NotFound desc = could not find container \"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\": container with ID starting with 2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa not 
found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.828655 4717 scope.go:117] "RemoveContainer" containerID="81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.828875 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac"} err="failed to get container status \"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\": rpc error: code = NotFound desc = could not find container \"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\": container with ID starting with 81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.828893 4717 scope.go:117] "RemoveContainer" containerID="816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.829253 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102"} err="failed to get container status \"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\": rpc error: code = NotFound desc = could not find container \"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\": container with ID starting with 816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.829285 4717 scope.go:117] "RemoveContainer" containerID="b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.829506 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40"} err="failed to get 
container status \"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\": rpc error: code = NotFound desc = could not find container \"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\": container with ID starting with b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.829522 4717 scope.go:117] "RemoveContainer" containerID="431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.829809 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c"} err="failed to get container status \"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\": rpc error: code = NotFound desc = could not find container \"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\": container with ID starting with 431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.829831 4717 scope.go:117] "RemoveContainer" containerID="69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.830185 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42"} err="failed to get container status \"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\": rpc error: code = NotFound desc = could not find container \"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\": container with ID starting with 69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.830209 4717 scope.go:117] "RemoveContainer" 
containerID="457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.830451 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df"} err="failed to get container status \"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\": rpc error: code = NotFound desc = could not find container \"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\": container with ID starting with 457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.830476 4717 scope.go:117] "RemoveContainer" containerID="c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.830708 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519"} err="failed to get container status \"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\": rpc error: code = NotFound desc = could not find container \"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\": container with ID starting with c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.830734 4717 scope.go:117] "RemoveContainer" containerID="7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.830925 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade"} err="failed to get container status \"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade\": rpc error: code = NotFound desc = could 
not find container \"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade\": container with ID starting with 7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.830953 4717 scope.go:117] "RemoveContainer" containerID="09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.831281 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24"} err="failed to get container status \"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\": rpc error: code = NotFound desc = could not find container \"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\": container with ID starting with 09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.831310 4717 scope.go:117] "RemoveContainer" containerID="2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.831532 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa"} err="failed to get container status \"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\": rpc error: code = NotFound desc = could not find container \"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\": container with ID starting with 2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.831553 4717 scope.go:117] "RemoveContainer" containerID="81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 
11:59:37.831871 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac"} err="failed to get container status \"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\": rpc error: code = NotFound desc = could not find container \"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\": container with ID starting with 81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.831899 4717 scope.go:117] "RemoveContainer" containerID="816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.832197 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102"} err="failed to get container status \"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\": rpc error: code = NotFound desc = could not find container \"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\": container with ID starting with 816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.832220 4717 scope.go:117] "RemoveContainer" containerID="b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.832567 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40"} err="failed to get container status \"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\": rpc error: code = NotFound desc = could not find container \"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\": container with ID starting with 
b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.832597 4717 scope.go:117] "RemoveContainer" containerID="431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.832998 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c"} err="failed to get container status \"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\": rpc error: code = NotFound desc = could not find container \"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\": container with ID starting with 431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.833050 4717 scope.go:117] "RemoveContainer" containerID="69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.833427 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42"} err="failed to get container status \"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\": rpc error: code = NotFound desc = could not find container \"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\": container with ID starting with 69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.833448 4717 scope.go:117] "RemoveContainer" containerID="457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.833823 4717 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df"} err="failed to get container status \"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\": rpc error: code = NotFound desc = could not find container \"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\": container with ID starting with 457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.833857 4717 scope.go:117] "RemoveContainer" containerID="c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.834085 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519"} err="failed to get container status \"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\": rpc error: code = NotFound desc = could not find container \"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\": container with ID starting with c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.834110 4717 scope.go:117] "RemoveContainer" containerID="7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.835150 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade"} err="failed to get container status \"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade\": rpc error: code = NotFound desc = could not find container \"7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade\": container with ID starting with 7a316de4098576736eae7a9ca55641a5453e78a1f3cfed8f19eadbbd409a7ade not found: ID does not 
exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.835174 4717 scope.go:117] "RemoveContainer" containerID="09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.837891 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24"} err="failed to get container status \"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\": rpc error: code = NotFound desc = could not find container \"09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24\": container with ID starting with 09caa124fbee22fb5982cf627d71bca183772be789bbce35a04160d2bc125e24 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.837917 4717 scope.go:117] "RemoveContainer" containerID="2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.838157 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa"} err="failed to get container status \"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\": rpc error: code = NotFound desc = could not find container \"2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa\": container with ID starting with 2aaa4a23048f7f5bba114bb92b2196343dc777a34e449ee25ae76d5f3a0ededa not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.838179 4717 scope.go:117] "RemoveContainer" containerID="81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.838521 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac"} err="failed to get container status 
\"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\": rpc error: code = NotFound desc = could not find container \"81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac\": container with ID starting with 81b89c63bdc99184ebab0988df24d00fc30576039f9219834bcb6460c708eeac not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.838539 4717 scope.go:117] "RemoveContainer" containerID="816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.838778 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102"} err="failed to get container status \"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\": rpc error: code = NotFound desc = could not find container \"816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102\": container with ID starting with 816dd69789f41ebf32a5511b8c0c8ae1e095fd1247ae6a24acadc0c99b50d102 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.838799 4717 scope.go:117] "RemoveContainer" containerID="b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.838996 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40"} err="failed to get container status \"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\": rpc error: code = NotFound desc = could not find container \"b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40\": container with ID starting with b5e5db7e85c15bb45dd2370b8818cda07b8237e2ea88fcee36a1cf8c5f12da40 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.839023 4717 scope.go:117] "RemoveContainer" 
containerID="431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.839272 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c"} err="failed to get container status \"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\": rpc error: code = NotFound desc = could not find container \"431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c\": container with ID starting with 431c698200fa733862938b8299849e12f5330d5875220c32948f729233b9858c not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.839290 4717 scope.go:117] "RemoveContainer" containerID="69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.839592 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42"} err="failed to get container status \"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\": rpc error: code = NotFound desc = could not find container \"69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42\": container with ID starting with 69a7ed8b379be1b1b1d6a1d03875b00f26324113601d0251d1de12c859ffde42 not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.839629 4717 scope.go:117] "RemoveContainer" containerID="457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.839899 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df"} err="failed to get container status \"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\": rpc error: code = NotFound desc = could 
not find container \"457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df\": container with ID starting with 457acb2d8dc961f75ad08990bc5cd81cb2d83fbae8ea5d7a608b1dada1cc25df not found: ID does not exist" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.839919 4717 scope.go:117] "RemoveContainer" containerID="c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519" Feb 18 11:59:37 crc kubenswrapper[4717]: I0218 11:59:37.840159 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519"} err="failed to get container status \"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\": rpc error: code = NotFound desc = could not find container \"c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519\": container with ID starting with c379459856d8dcaa6723e1c5eefdbfc23393cefdcf79b15d2e2c80b5c170b519 not found: ID does not exist" Feb 18 11:59:38 crc kubenswrapper[4717]: I0218 11:59:38.617365 4717 generic.go:334] "Generic (PLEG): container finished" podID="d27a097b-318a-421e-93e9-31ccbd231535" containerID="62a28d288e85cc427b800ab2cdf81f0bd36329bc549d99c100059283b55a7487" exitCode=0 Feb 18 11:59:38 crc kubenswrapper[4717]: I0218 11:59:38.617497 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" event={"ID":"d27a097b-318a-421e-93e9-31ccbd231535","Type":"ContainerDied","Data":"62a28d288e85cc427b800ab2cdf81f0bd36329bc549d99c100059283b55a7487"} Feb 18 11:59:38 crc kubenswrapper[4717]: I0218 11:59:38.617534 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" event={"ID":"d27a097b-318a-421e-93e9-31ccbd231535","Type":"ContainerStarted","Data":"5d77ffceb4ba4879ca3ce3a503f2ff5e2bc1a121e96e74fbffc44fbd57322caa"} Feb 18 11:59:38 crc kubenswrapper[4717]: I0218 11:59:38.621269 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-hvktx_41f72a5f-4820-4dc2-a6c5-243550881aaf/kube-multus/2.log" Feb 18 11:59:39 crc kubenswrapper[4717]: I0218 11:59:39.044838 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26c6bcf7-2c2a-41bf-b76c-4f040f5693f6" path="/var/lib/kubelet/pods/26c6bcf7-2c2a-41bf-b76c-4f040f5693f6/volumes" Feb 18 11:59:39 crc kubenswrapper[4717]: I0218 11:59:39.632810 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" event={"ID":"d27a097b-318a-421e-93e9-31ccbd231535","Type":"ContainerStarted","Data":"41994c7de3efdc33c84c028e4fe014a02b524058a125be807d8898ea32786b6a"} Feb 18 11:59:39 crc kubenswrapper[4717]: I0218 11:59:39.632898 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" event={"ID":"d27a097b-318a-421e-93e9-31ccbd231535","Type":"ContainerStarted","Data":"1f6ef9cb59dd2eed8675c47963b8f63b6260a13ad1e54a958f2074ad8124c6fd"} Feb 18 11:59:39 crc kubenswrapper[4717]: I0218 11:59:39.632912 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" event={"ID":"d27a097b-318a-421e-93e9-31ccbd231535","Type":"ContainerStarted","Data":"e064787d02d2b1a445a22d80024d57ae6c26961a11a79b3fada0c7712970f262"} Feb 18 11:59:39 crc kubenswrapper[4717]: I0218 11:59:39.632924 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" event={"ID":"d27a097b-318a-421e-93e9-31ccbd231535","Type":"ContainerStarted","Data":"f3aa09dc9a02311b7edba492232ec2b372fd2933db5cc7394ae8c43904aec862"} Feb 18 11:59:39 crc kubenswrapper[4717]: I0218 11:59:39.632936 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" event={"ID":"d27a097b-318a-421e-93e9-31ccbd231535","Type":"ContainerStarted","Data":"1b87dbbe1414d89ded854080996702aa9c7c0f2816e7f6b29006bb3fdee2fc4c"} Feb 18 11:59:39 crc kubenswrapper[4717]: 
I0218 11:59:39.632947 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" event={"ID":"d27a097b-318a-421e-93e9-31ccbd231535","Type":"ContainerStarted","Data":"77f8f8639e6b8c1226335b6af1c230ab88d4bd7abca371dad013d68e7836d52e"} Feb 18 11:59:41 crc kubenswrapper[4717]: I0218 11:59:41.648592 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" event={"ID":"d27a097b-318a-421e-93e9-31ccbd231535","Type":"ContainerStarted","Data":"2a82e05a4ac02184ed52d6cd14fb3c6493f683d63a221ec71e5d7e074c2c0135"} Feb 18 11:59:42 crc kubenswrapper[4717]: I0218 11:59:42.750514 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-8dvlf" Feb 18 11:59:42 crc kubenswrapper[4717]: I0218 11:59:42.773831 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 11:59:42 crc kubenswrapper[4717]: I0218 11:59:42.773926 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 11:59:44 crc kubenswrapper[4717]: I0218 11:59:44.669841 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" event={"ID":"d27a097b-318a-421e-93e9-31ccbd231535","Type":"ContainerStarted","Data":"5b913fbf6aa4b9c908e4add5500c11f8360b84345cea481dba3eeaf6eaf223e4"} Feb 18 11:59:44 crc kubenswrapper[4717]: I0218 11:59:44.670298 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:44 crc kubenswrapper[4717]: I0218 11:59:44.670351 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:44 crc kubenswrapper[4717]: I0218 11:59:44.670370 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:44 crc kubenswrapper[4717]: I0218 11:59:44.697445 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:44 crc kubenswrapper[4717]: I0218 11:59:44.698528 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 11:59:44 crc kubenswrapper[4717]: I0218 11:59:44.702455 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" podStartSLOduration=7.702437041 podStartE2EDuration="7.702437041s" podCreationTimestamp="2026-02-18 11:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 11:59:44.700625588 +0000 UTC m=+619.102726914" watchObservedRunningTime="2026-02-18 11:59:44.702437041 +0000 UTC m=+619.104538357" Feb 18 11:59:50 crc kubenswrapper[4717]: I0218 11:59:50.036543 4717 scope.go:117] "RemoveContainer" containerID="99bf552605d0eed937374d50a80b61ae031663c7b68d2cb0435c94aa2c2469cd" Feb 18 11:59:50 crc kubenswrapper[4717]: E0218 11:59:50.037483 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-hvktx_openshift-multus(41f72a5f-4820-4dc2-a6c5-243550881aaf)\"" pod="openshift-multus/multus-hvktx" podUID="41f72a5f-4820-4dc2-a6c5-243550881aaf" Feb 18 12:00:00 crc kubenswrapper[4717]: I0218 
12:00:00.164607 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk"] Feb 18 12:00:00 crc kubenswrapper[4717]: I0218 12:00:00.166107 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc kubenswrapper[4717]: I0218 12:00:00.168352 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 12:00:00 crc kubenswrapper[4717]: I0218 12:00:00.169732 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk"] Feb 18 12:00:00 crc kubenswrapper[4717]: I0218 12:00:00.169964 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 12:00:00 crc kubenswrapper[4717]: I0218 12:00:00.255957 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f3dc2ae-655b-4dfd-a294-22d48dce0867-secret-volume\") pod \"collect-profiles-29523600-fqhzk\" (UID: \"8f3dc2ae-655b-4dfd-a294-22d48dce0867\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc kubenswrapper[4717]: I0218 12:00:00.256014 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f3dc2ae-655b-4dfd-a294-22d48dce0867-config-volume\") pod \"collect-profiles-29523600-fqhzk\" (UID: \"8f3dc2ae-655b-4dfd-a294-22d48dce0867\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc kubenswrapper[4717]: I0218 12:00:00.256051 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8wjwx\" (UniqueName: \"kubernetes.io/projected/8f3dc2ae-655b-4dfd-a294-22d48dce0867-kube-api-access-8wjwx\") pod \"collect-profiles-29523600-fqhzk\" (UID: \"8f3dc2ae-655b-4dfd-a294-22d48dce0867\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc kubenswrapper[4717]: I0218 12:00:00.357838 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f3dc2ae-655b-4dfd-a294-22d48dce0867-secret-volume\") pod \"collect-profiles-29523600-fqhzk\" (UID: \"8f3dc2ae-655b-4dfd-a294-22d48dce0867\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc kubenswrapper[4717]: I0218 12:00:00.357901 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f3dc2ae-655b-4dfd-a294-22d48dce0867-config-volume\") pod \"collect-profiles-29523600-fqhzk\" (UID: \"8f3dc2ae-655b-4dfd-a294-22d48dce0867\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc kubenswrapper[4717]: I0218 12:00:00.357925 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wjwx\" (UniqueName: \"kubernetes.io/projected/8f3dc2ae-655b-4dfd-a294-22d48dce0867-kube-api-access-8wjwx\") pod \"collect-profiles-29523600-fqhzk\" (UID: \"8f3dc2ae-655b-4dfd-a294-22d48dce0867\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc kubenswrapper[4717]: I0218 12:00:00.359149 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f3dc2ae-655b-4dfd-a294-22d48dce0867-config-volume\") pod \"collect-profiles-29523600-fqhzk\" (UID: \"8f3dc2ae-655b-4dfd-a294-22d48dce0867\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc 
kubenswrapper[4717]: I0218 12:00:00.373382 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f3dc2ae-655b-4dfd-a294-22d48dce0867-secret-volume\") pod \"collect-profiles-29523600-fqhzk\" (UID: \"8f3dc2ae-655b-4dfd-a294-22d48dce0867\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc kubenswrapper[4717]: I0218 12:00:00.378902 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wjwx\" (UniqueName: \"kubernetes.io/projected/8f3dc2ae-655b-4dfd-a294-22d48dce0867-kube-api-access-8wjwx\") pod \"collect-profiles-29523600-fqhzk\" (UID: \"8f3dc2ae-655b-4dfd-a294-22d48dce0867\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc kubenswrapper[4717]: I0218 12:00:00.486547 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc kubenswrapper[4717]: E0218 12:00:00.517668 4717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29523600-fqhzk_openshift-operator-lifecycle-manager_8f3dc2ae-655b-4dfd-a294-22d48dce0867_0(e8d7c4575a2338357f8c6cfcb9dd1c8ec312fa750c0eac41ef3d972571398d2e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 12:00:00 crc kubenswrapper[4717]: E0218 12:00:00.518475 4717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29523600-fqhzk_openshift-operator-lifecycle-manager_8f3dc2ae-655b-4dfd-a294-22d48dce0867_0(e8d7c4575a2338357f8c6cfcb9dd1c8ec312fa750c0eac41ef3d972571398d2e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc kubenswrapper[4717]: E0218 12:00:00.518546 4717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29523600-fqhzk_openshift-operator-lifecycle-manager_8f3dc2ae-655b-4dfd-a294-22d48dce0867_0(e8d7c4575a2338357f8c6cfcb9dd1c8ec312fa750c0eac41ef3d972571398d2e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc kubenswrapper[4717]: E0218 12:00:00.518681 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29523600-fqhzk_openshift-operator-lifecycle-manager(8f3dc2ae-655b-4dfd-a294-22d48dce0867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29523600-fqhzk_openshift-operator-lifecycle-manager(8f3dc2ae-655b-4dfd-a294-22d48dce0867)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29523600-fqhzk_openshift-operator-lifecycle-manager_8f3dc2ae-655b-4dfd-a294-22d48dce0867_0(e8d7c4575a2338357f8c6cfcb9dd1c8ec312fa750c0eac41ef3d972571398d2e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" podUID="8f3dc2ae-655b-4dfd-a294-22d48dce0867" Feb 18 12:00:00 crc kubenswrapper[4717]: I0218 12:00:00.761777 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc kubenswrapper[4717]: I0218 12:00:00.762368 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc kubenswrapper[4717]: E0218 12:00:00.794562 4717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29523600-fqhzk_openshift-operator-lifecycle-manager_8f3dc2ae-655b-4dfd-a294-22d48dce0867_0(123ae68d85d227be3e6285bde7c44c29958a84cdba06108f76bc7e09bc06946c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 12:00:00 crc kubenswrapper[4717]: E0218 12:00:00.794632 4717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29523600-fqhzk_openshift-operator-lifecycle-manager_8f3dc2ae-655b-4dfd-a294-22d48dce0867_0(123ae68d85d227be3e6285bde7c44c29958a84cdba06108f76bc7e09bc06946c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc kubenswrapper[4717]: E0218 12:00:00.794659 4717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29523600-fqhzk_openshift-operator-lifecycle-manager_8f3dc2ae-655b-4dfd-a294-22d48dce0867_0(123ae68d85d227be3e6285bde7c44c29958a84cdba06108f76bc7e09bc06946c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:00 crc kubenswrapper[4717]: E0218 12:00:00.794711 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29523600-fqhzk_openshift-operator-lifecycle-manager(8f3dc2ae-655b-4dfd-a294-22d48dce0867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29523600-fqhzk_openshift-operator-lifecycle-manager(8f3dc2ae-655b-4dfd-a294-22d48dce0867)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29523600-fqhzk_openshift-operator-lifecycle-manager_8f3dc2ae-655b-4dfd-a294-22d48dce0867_0(123ae68d85d227be3e6285bde7c44c29958a84cdba06108f76bc7e09bc06946c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" podUID="8f3dc2ae-655b-4dfd-a294-22d48dce0867" Feb 18 12:00:01 crc kubenswrapper[4717]: I0218 12:00:01.037008 4717 scope.go:117] "RemoveContainer" containerID="99bf552605d0eed937374d50a80b61ae031663c7b68d2cb0435c94aa2c2469cd" Feb 18 12:00:01 crc kubenswrapper[4717]: I0218 12:00:01.771517 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hvktx_41f72a5f-4820-4dc2-a6c5-243550881aaf/kube-multus/2.log" Feb 18 12:00:01 crc kubenswrapper[4717]: I0218 12:00:01.771605 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hvktx" event={"ID":"41f72a5f-4820-4dc2-a6c5-243550881aaf","Type":"ContainerStarted","Data":"2d37afdba3845135e2614d827ed5dfb0bf48cc9f031081331c42c0b99b066bd8"} Feb 18 12:00:07 crc kubenswrapper[4717]: I0218 12:00:07.801131 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kf57z" Feb 18 12:00:12 crc kubenswrapper[4717]: I0218 12:00:12.773176 4717 patch_prober.go:28] interesting 
pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:00:12 crc kubenswrapper[4717]: I0218 12:00:12.774125 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:00:12 crc kubenswrapper[4717]: I0218 12:00:12.774208 4717 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 12:00:12 crc kubenswrapper[4717]: I0218 12:00:12.775090 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dec67bbab9750d74b40d9a5adb8456536898db0fb4f915a1c596f22068aed83e"} pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:00:12 crc kubenswrapper[4717]: I0218 12:00:12.775164 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" containerID="cri-o://dec67bbab9750d74b40d9a5adb8456536898db0fb4f915a1c596f22068aed83e" gracePeriod=600 Feb 18 12:00:13 crc kubenswrapper[4717]: I0218 12:00:13.845518 4717 generic.go:334] "Generic (PLEG): container finished" podID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerID="dec67bbab9750d74b40d9a5adb8456536898db0fb4f915a1c596f22068aed83e" exitCode=0 Feb 18 12:00:13 crc kubenswrapper[4717]: I0218 12:00:13.845586 
4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerDied","Data":"dec67bbab9750d74b40d9a5adb8456536898db0fb4f915a1c596f22068aed83e"} Feb 18 12:00:13 crc kubenswrapper[4717]: I0218 12:00:13.846544 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"11a1c75eda22e757819ca65e0602c28b288a19c1473f7585c0555728262bdcd4"} Feb 18 12:00:13 crc kubenswrapper[4717]: I0218 12:00:13.846571 4717 scope.go:117] "RemoveContainer" containerID="94e1dd922b2fb688b0271b6031b400dcd65ef629725f95285a8e4f6c5fddd404" Feb 18 12:00:14 crc kubenswrapper[4717]: I0218 12:00:14.036614 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:14 crc kubenswrapper[4717]: I0218 12:00:14.037654 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:14 crc kubenswrapper[4717]: I0218 12:00:14.251654 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk"] Feb 18 12:00:14 crc kubenswrapper[4717]: I0218 12:00:14.854149 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" event={"ID":"8f3dc2ae-655b-4dfd-a294-22d48dce0867","Type":"ContainerStarted","Data":"cf932904573e6b18d4e98e1748eec0f1b21b3568ff79037607e977728c8e8011"} Feb 18 12:00:14 crc kubenswrapper[4717]: I0218 12:00:14.855064 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" event={"ID":"8f3dc2ae-655b-4dfd-a294-22d48dce0867","Type":"ContainerStarted","Data":"bb1342c51a2c55d070c7716b1d4f47d53216fb07a499bd2599b5a2a07621b4c6"} Feb 18 12:00:15 crc kubenswrapper[4717]: I0218 12:00:15.867126 4717 generic.go:334] "Generic (PLEG): container finished" podID="8f3dc2ae-655b-4dfd-a294-22d48dce0867" containerID="cf932904573e6b18d4e98e1748eec0f1b21b3568ff79037607e977728c8e8011" exitCode=0 Feb 18 12:00:15 crc kubenswrapper[4717]: I0218 12:00:15.867638 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" event={"ID":"8f3dc2ae-655b-4dfd-a294-22d48dce0867","Type":"ContainerDied","Data":"cf932904573e6b18d4e98e1748eec0f1b21b3568ff79037607e977728c8e8011"} Feb 18 12:00:17 crc kubenswrapper[4717]: I0218 12:00:17.107050 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:17 crc kubenswrapper[4717]: I0218 12:00:17.183995 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wjwx\" (UniqueName: \"kubernetes.io/projected/8f3dc2ae-655b-4dfd-a294-22d48dce0867-kube-api-access-8wjwx\") pod \"8f3dc2ae-655b-4dfd-a294-22d48dce0867\" (UID: \"8f3dc2ae-655b-4dfd-a294-22d48dce0867\") " Feb 18 12:00:17 crc kubenswrapper[4717]: I0218 12:00:17.184464 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f3dc2ae-655b-4dfd-a294-22d48dce0867-secret-volume\") pod \"8f3dc2ae-655b-4dfd-a294-22d48dce0867\" (UID: \"8f3dc2ae-655b-4dfd-a294-22d48dce0867\") " Feb 18 12:00:17 crc kubenswrapper[4717]: I0218 12:00:17.184531 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f3dc2ae-655b-4dfd-a294-22d48dce0867-config-volume\") pod \"8f3dc2ae-655b-4dfd-a294-22d48dce0867\" (UID: \"8f3dc2ae-655b-4dfd-a294-22d48dce0867\") " Feb 18 12:00:17 crc kubenswrapper[4717]: I0218 12:00:17.185341 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f3dc2ae-655b-4dfd-a294-22d48dce0867-config-volume" (OuterVolumeSpecName: "config-volume") pod "8f3dc2ae-655b-4dfd-a294-22d48dce0867" (UID: "8f3dc2ae-655b-4dfd-a294-22d48dce0867"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:00:17 crc kubenswrapper[4717]: I0218 12:00:17.190479 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3dc2ae-655b-4dfd-a294-22d48dce0867-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8f3dc2ae-655b-4dfd-a294-22d48dce0867" (UID: "8f3dc2ae-655b-4dfd-a294-22d48dce0867"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:00:17 crc kubenswrapper[4717]: I0218 12:00:17.192432 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f3dc2ae-655b-4dfd-a294-22d48dce0867-kube-api-access-8wjwx" (OuterVolumeSpecName: "kube-api-access-8wjwx") pod "8f3dc2ae-655b-4dfd-a294-22d48dce0867" (UID: "8f3dc2ae-655b-4dfd-a294-22d48dce0867"). InnerVolumeSpecName "kube-api-access-8wjwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:00:17 crc kubenswrapper[4717]: I0218 12:00:17.293738 4717 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f3dc2ae-655b-4dfd-a294-22d48dce0867-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:17 crc kubenswrapper[4717]: I0218 12:00:17.293804 4717 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f3dc2ae-655b-4dfd-a294-22d48dce0867-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:17 crc kubenswrapper[4717]: I0218 12:00:17.293820 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wjwx\" (UniqueName: \"kubernetes.io/projected/8f3dc2ae-655b-4dfd-a294-22d48dce0867-kube-api-access-8wjwx\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:17 crc kubenswrapper[4717]: I0218 12:00:17.879826 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" event={"ID":"8f3dc2ae-655b-4dfd-a294-22d48dce0867","Type":"ContainerDied","Data":"bb1342c51a2c55d070c7716b1d4f47d53216fb07a499bd2599b5a2a07621b4c6"} Feb 18 12:00:17 crc kubenswrapper[4717]: I0218 12:00:17.880202 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb1342c51a2c55d070c7716b1d4f47d53216fb07a499bd2599b5a2a07621b4c6" Feb 18 12:00:17 crc kubenswrapper[4717]: I0218 12:00:17.879919 4717 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk" Feb 18 12:00:24 crc kubenswrapper[4717]: I0218 12:00:24.848862 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd"] Feb 18 12:00:24 crc kubenswrapper[4717]: E0218 12:00:24.850399 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3dc2ae-655b-4dfd-a294-22d48dce0867" containerName="collect-profiles" Feb 18 12:00:24 crc kubenswrapper[4717]: I0218 12:00:24.850465 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3dc2ae-655b-4dfd-a294-22d48dce0867" containerName="collect-profiles" Feb 18 12:00:24 crc kubenswrapper[4717]: I0218 12:00:24.850617 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3dc2ae-655b-4dfd-a294-22d48dce0867" containerName="collect-profiles" Feb 18 12:00:24 crc kubenswrapper[4717]: I0218 12:00:24.851346 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" Feb 18 12:00:24 crc kubenswrapper[4717]: I0218 12:00:24.854734 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 12:00:24 crc kubenswrapper[4717]: I0218 12:00:24.862719 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd"] Feb 18 12:00:24 crc kubenswrapper[4717]: I0218 12:00:24.944563 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce95cdae-125f-4394-8f29-8d718f8297c4-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd\" (UID: \"ce95cdae-125f-4394-8f29-8d718f8297c4\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" Feb 18 12:00:24 crc kubenswrapper[4717]: I0218 12:00:24.944627 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce95cdae-125f-4394-8f29-8d718f8297c4-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd\" (UID: \"ce95cdae-125f-4394-8f29-8d718f8297c4\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" Feb 18 12:00:24 crc kubenswrapper[4717]: I0218 12:00:24.944684 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r85hn\" (UniqueName: \"kubernetes.io/projected/ce95cdae-125f-4394-8f29-8d718f8297c4-kube-api-access-r85hn\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd\" (UID: \"ce95cdae-125f-4394-8f29-8d718f8297c4\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" Feb 18 12:00:25 crc kubenswrapper[4717]: 
I0218 12:00:25.045601 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce95cdae-125f-4394-8f29-8d718f8297c4-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd\" (UID: \"ce95cdae-125f-4394-8f29-8d718f8297c4\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" Feb 18 12:00:25 crc kubenswrapper[4717]: I0218 12:00:25.045685 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce95cdae-125f-4394-8f29-8d718f8297c4-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd\" (UID: \"ce95cdae-125f-4394-8f29-8d718f8297c4\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" Feb 18 12:00:25 crc kubenswrapper[4717]: I0218 12:00:25.045739 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r85hn\" (UniqueName: \"kubernetes.io/projected/ce95cdae-125f-4394-8f29-8d718f8297c4-kube-api-access-r85hn\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd\" (UID: \"ce95cdae-125f-4394-8f29-8d718f8297c4\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" Feb 18 12:00:25 crc kubenswrapper[4717]: I0218 12:00:25.046360 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce95cdae-125f-4394-8f29-8d718f8297c4-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd\" (UID: \"ce95cdae-125f-4394-8f29-8d718f8297c4\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" Feb 18 12:00:25 crc kubenswrapper[4717]: I0218 12:00:25.046425 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ce95cdae-125f-4394-8f29-8d718f8297c4-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd\" (UID: \"ce95cdae-125f-4394-8f29-8d718f8297c4\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" Feb 18 12:00:25 crc kubenswrapper[4717]: I0218 12:00:25.068637 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r85hn\" (UniqueName: \"kubernetes.io/projected/ce95cdae-125f-4394-8f29-8d718f8297c4-kube-api-access-r85hn\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd\" (UID: \"ce95cdae-125f-4394-8f29-8d718f8297c4\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" Feb 18 12:00:25 crc kubenswrapper[4717]: I0218 12:00:25.166536 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" Feb 18 12:00:25 crc kubenswrapper[4717]: I0218 12:00:25.611695 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd"] Feb 18 12:00:25 crc kubenswrapper[4717]: I0218 12:00:25.933998 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" event={"ID":"ce95cdae-125f-4394-8f29-8d718f8297c4","Type":"ContainerStarted","Data":"90f430f4f8e41117086c53f90c210043368dd5e17445d21843f4f809c59ff1eb"} Feb 18 12:00:25 crc kubenswrapper[4717]: I0218 12:00:25.934477 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" event={"ID":"ce95cdae-125f-4394-8f29-8d718f8297c4","Type":"ContainerStarted","Data":"8ff829275106711c4436d4e2d6e5d6468e33f383255b49904fc56a8d19eedcee"} Feb 18 12:00:26 crc kubenswrapper[4717]: I0218 12:00:26.941872 4717 
generic.go:334] "Generic (PLEG): container finished" podID="ce95cdae-125f-4394-8f29-8d718f8297c4" containerID="90f430f4f8e41117086c53f90c210043368dd5e17445d21843f4f809c59ff1eb" exitCode=0 Feb 18 12:00:26 crc kubenswrapper[4717]: I0218 12:00:26.941938 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" event={"ID":"ce95cdae-125f-4394-8f29-8d718f8297c4","Type":"ContainerDied","Data":"90f430f4f8e41117086c53f90c210043368dd5e17445d21843f4f809c59ff1eb"} Feb 18 12:00:28 crc kubenswrapper[4717]: I0218 12:00:28.953236 4717 generic.go:334] "Generic (PLEG): container finished" podID="ce95cdae-125f-4394-8f29-8d718f8297c4" containerID="b4d7b372492a2b0d63cfaadc7e0ba65245dd7dbb35469c28faddd259fb29b6ae" exitCode=0 Feb 18 12:00:28 crc kubenswrapper[4717]: I0218 12:00:28.953547 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" event={"ID":"ce95cdae-125f-4394-8f29-8d718f8297c4","Type":"ContainerDied","Data":"b4d7b372492a2b0d63cfaadc7e0ba65245dd7dbb35469c28faddd259fb29b6ae"} Feb 18 12:00:29 crc kubenswrapper[4717]: I0218 12:00:29.962093 4717 generic.go:334] "Generic (PLEG): container finished" podID="ce95cdae-125f-4394-8f29-8d718f8297c4" containerID="2a453bad2eeb43d8bfc7b8cfa5df08dbb4af084b79e6dc0cb0ad0d2052c65a37" exitCode=0 Feb 18 12:00:29 crc kubenswrapper[4717]: I0218 12:00:29.962150 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" event={"ID":"ce95cdae-125f-4394-8f29-8d718f8297c4","Type":"ContainerDied","Data":"2a453bad2eeb43d8bfc7b8cfa5df08dbb4af084b79e6dc0cb0ad0d2052c65a37"} Feb 18 12:00:31 crc kubenswrapper[4717]: I0218 12:00:31.184585 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" Feb 18 12:00:31 crc kubenswrapper[4717]: I0218 12:00:31.323506 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce95cdae-125f-4394-8f29-8d718f8297c4-bundle\") pod \"ce95cdae-125f-4394-8f29-8d718f8297c4\" (UID: \"ce95cdae-125f-4394-8f29-8d718f8297c4\") " Feb 18 12:00:31 crc kubenswrapper[4717]: I0218 12:00:31.323616 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce95cdae-125f-4394-8f29-8d718f8297c4-util\") pod \"ce95cdae-125f-4394-8f29-8d718f8297c4\" (UID: \"ce95cdae-125f-4394-8f29-8d718f8297c4\") " Feb 18 12:00:31 crc kubenswrapper[4717]: I0218 12:00:31.323704 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r85hn\" (UniqueName: \"kubernetes.io/projected/ce95cdae-125f-4394-8f29-8d718f8297c4-kube-api-access-r85hn\") pod \"ce95cdae-125f-4394-8f29-8d718f8297c4\" (UID: \"ce95cdae-125f-4394-8f29-8d718f8297c4\") " Feb 18 12:00:31 crc kubenswrapper[4717]: I0218 12:00:31.324730 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce95cdae-125f-4394-8f29-8d718f8297c4-bundle" (OuterVolumeSpecName: "bundle") pod "ce95cdae-125f-4394-8f29-8d718f8297c4" (UID: "ce95cdae-125f-4394-8f29-8d718f8297c4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:00:31 crc kubenswrapper[4717]: I0218 12:00:31.331634 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce95cdae-125f-4394-8f29-8d718f8297c4-kube-api-access-r85hn" (OuterVolumeSpecName: "kube-api-access-r85hn") pod "ce95cdae-125f-4394-8f29-8d718f8297c4" (UID: "ce95cdae-125f-4394-8f29-8d718f8297c4"). InnerVolumeSpecName "kube-api-access-r85hn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:00:31 crc kubenswrapper[4717]: I0218 12:00:31.425447 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r85hn\" (UniqueName: \"kubernetes.io/projected/ce95cdae-125f-4394-8f29-8d718f8297c4-kube-api-access-r85hn\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:31 crc kubenswrapper[4717]: I0218 12:00:31.425486 4717 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce95cdae-125f-4394-8f29-8d718f8297c4-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:31 crc kubenswrapper[4717]: I0218 12:00:31.840707 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce95cdae-125f-4394-8f29-8d718f8297c4-util" (OuterVolumeSpecName: "util") pod "ce95cdae-125f-4394-8f29-8d718f8297c4" (UID: "ce95cdae-125f-4394-8f29-8d718f8297c4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:00:31 crc kubenswrapper[4717]: I0218 12:00:31.930625 4717 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce95cdae-125f-4394-8f29-8d718f8297c4-util\") on node \"crc\" DevicePath \"\"" Feb 18 12:00:31 crc kubenswrapper[4717]: I0218 12:00:31.973597 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" event={"ID":"ce95cdae-125f-4394-8f29-8d718f8297c4","Type":"ContainerDied","Data":"8ff829275106711c4436d4e2d6e5d6468e33f383255b49904fc56a8d19eedcee"} Feb 18 12:00:31 crc kubenswrapper[4717]: I0218 12:00:31.973646 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ff829275106711c4436d4e2d6e5d6468e33f383255b49904fc56a8d19eedcee" Feb 18 12:00:31 crc kubenswrapper[4717]: I0218 12:00:31.973721 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd" Feb 18 12:00:33 crc kubenswrapper[4717]: I0218 12:00:33.508684 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-sj6bs"] Feb 18 12:00:33 crc kubenswrapper[4717]: E0218 12:00:33.510122 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce95cdae-125f-4394-8f29-8d718f8297c4" containerName="util" Feb 18 12:00:33 crc kubenswrapper[4717]: I0218 12:00:33.510194 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce95cdae-125f-4394-8f29-8d718f8297c4" containerName="util" Feb 18 12:00:33 crc kubenswrapper[4717]: E0218 12:00:33.510293 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce95cdae-125f-4394-8f29-8d718f8297c4" containerName="pull" Feb 18 12:00:33 crc kubenswrapper[4717]: I0218 12:00:33.510354 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce95cdae-125f-4394-8f29-8d718f8297c4" containerName="pull" Feb 18 12:00:33 crc kubenswrapper[4717]: E0218 12:00:33.510414 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce95cdae-125f-4394-8f29-8d718f8297c4" containerName="extract" Feb 18 12:00:33 crc kubenswrapper[4717]: I0218 12:00:33.510462 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce95cdae-125f-4394-8f29-8d718f8297c4" containerName="extract" Feb 18 12:00:33 crc kubenswrapper[4717]: I0218 12:00:33.510633 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce95cdae-125f-4394-8f29-8d718f8297c4" containerName="extract" Feb 18 12:00:33 crc kubenswrapper[4717]: I0218 12:00:33.511226 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-sj6bs" Feb 18 12:00:33 crc kubenswrapper[4717]: I0218 12:00:33.514761 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 18 12:00:33 crc kubenswrapper[4717]: I0218 12:00:33.515583 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 18 12:00:33 crc kubenswrapper[4717]: I0218 12:00:33.516912 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-hh84z" Feb 18 12:00:33 crc kubenswrapper[4717]: I0218 12:00:33.522607 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-sj6bs"] Feb 18 12:00:33 crc kubenswrapper[4717]: I0218 12:00:33.652963 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt26h\" (UniqueName: \"kubernetes.io/projected/42edbbd9-e0db-4a1f-b9fc-c0987cae7f48-kube-api-access-rt26h\") pod \"nmstate-operator-694c9596b7-sj6bs\" (UID: \"42edbbd9-e0db-4a1f-b9fc-c0987cae7f48\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-sj6bs" Feb 18 12:00:33 crc kubenswrapper[4717]: I0218 12:00:33.753814 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt26h\" (UniqueName: \"kubernetes.io/projected/42edbbd9-e0db-4a1f-b9fc-c0987cae7f48-kube-api-access-rt26h\") pod \"nmstate-operator-694c9596b7-sj6bs\" (UID: \"42edbbd9-e0db-4a1f-b9fc-c0987cae7f48\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-sj6bs" Feb 18 12:00:33 crc kubenswrapper[4717]: I0218 12:00:33.792282 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt26h\" (UniqueName: \"kubernetes.io/projected/42edbbd9-e0db-4a1f-b9fc-c0987cae7f48-kube-api-access-rt26h\") pod \"nmstate-operator-694c9596b7-sj6bs\" (UID: 
\"42edbbd9-e0db-4a1f-b9fc-c0987cae7f48\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-sj6bs" Feb 18 12:00:33 crc kubenswrapper[4717]: I0218 12:00:33.830523 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-sj6bs" Feb 18 12:00:34 crc kubenswrapper[4717]: I0218 12:00:34.092135 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-sj6bs"] Feb 18 12:00:34 crc kubenswrapper[4717]: W0218 12:00:34.096315 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42edbbd9_e0db_4a1f_b9fc_c0987cae7f48.slice/crio-c790bdb375c1c7fc4a795c49c2b5fb9f57b2e66166aaea97847e905f1de2a449 WatchSource:0}: Error finding container c790bdb375c1c7fc4a795c49c2b5fb9f57b2e66166aaea97847e905f1de2a449: Status 404 returned error can't find the container with id c790bdb375c1c7fc4a795c49c2b5fb9f57b2e66166aaea97847e905f1de2a449 Feb 18 12:00:34 crc kubenswrapper[4717]: I0218 12:00:34.991804 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-sj6bs" event={"ID":"42edbbd9-e0db-4a1f-b9fc-c0987cae7f48","Type":"ContainerStarted","Data":"c790bdb375c1c7fc4a795c49c2b5fb9f57b2e66166aaea97847e905f1de2a449"} Feb 18 12:00:37 crc kubenswrapper[4717]: I0218 12:00:37.009402 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-sj6bs" event={"ID":"42edbbd9-e0db-4a1f-b9fc-c0987cae7f48","Type":"ContainerStarted","Data":"7daa188eb51c8a8e107a69d28a0ccd6ab3e305b1d1a096960d79cc645e1127b4"} Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.739897 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-sj6bs" podStartSLOduration=9.650264689 podStartE2EDuration="11.739875474s" podCreationTimestamp="2026-02-18 12:00:33 +0000 UTC" 
firstStartedPulling="2026-02-18 12:00:34.09898301 +0000 UTC m=+668.501084326" lastFinishedPulling="2026-02-18 12:00:36.188593795 +0000 UTC m=+670.590695111" observedRunningTime="2026-02-18 12:00:37.034864176 +0000 UTC m=+671.436965492" watchObservedRunningTime="2026-02-18 12:00:44.739875474 +0000 UTC m=+679.141976790" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.743057 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-nb8mk"] Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.743983 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-nb8mk" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.747038 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-4z6fx" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.755182 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-nb8mk"] Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.780340 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-j27f6"] Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.781328 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-j27f6" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.786097 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-4xf48"] Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.786996 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4xf48" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.792071 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.797080 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-4xf48"] Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.881424 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4"] Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.882118 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.883892 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.884002 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-vpgdf" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.884818 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.891615 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4"] Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.937090 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/58fee9d2-2e42-46e4-b5a2-8b8c80a52424-nmstate-lock\") pod \"nmstate-handler-j27f6\" (UID: \"58fee9d2-2e42-46e4-b5a2-8b8c80a52424\") " pod="openshift-nmstate/nmstate-handler-j27f6" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.937145 4717 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbj44\" (UniqueName: \"kubernetes.io/projected/7f6c194d-4ed9-4ab8-af9c-bb9c44324b0d-kube-api-access-pbj44\") pod \"nmstate-webhook-866bcb46dc-4xf48\" (UID: \"7f6c194d-4ed9-4ab8-af9c-bb9c44324b0d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4xf48" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.937181 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/58fee9d2-2e42-46e4-b5a2-8b8c80a52424-ovs-socket\") pod \"nmstate-handler-j27f6\" (UID: \"58fee9d2-2e42-46e4-b5a2-8b8c80a52424\") " pod="openshift-nmstate/nmstate-handler-j27f6" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.937255 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f80bcf06-9be6-4c29-9ed7-d575837ff0d6-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-z9wv4\" (UID: \"f80bcf06-9be6-4c29-9ed7-d575837ff0d6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.937349 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zld8j\" (UniqueName: \"kubernetes.io/projected/58fee9d2-2e42-46e4-b5a2-8b8c80a52424-kube-api-access-zld8j\") pod \"nmstate-handler-j27f6\" (UID: \"58fee9d2-2e42-46e4-b5a2-8b8c80a52424\") " pod="openshift-nmstate/nmstate-handler-j27f6" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.937370 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb74x\" (UniqueName: \"kubernetes.io/projected/f80bcf06-9be6-4c29-9ed7-d575837ff0d6-kube-api-access-gb74x\") pod \"nmstate-console-plugin-5c78fc5d65-z9wv4\" (UID: \"f80bcf06-9be6-4c29-9ed7-d575837ff0d6\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.937396 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f80bcf06-9be6-4c29-9ed7-d575837ff0d6-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-z9wv4\" (UID: \"f80bcf06-9be6-4c29-9ed7-d575837ff0d6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.937430 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/58fee9d2-2e42-46e4-b5a2-8b8c80a52424-dbus-socket\") pod \"nmstate-handler-j27f6\" (UID: \"58fee9d2-2e42-46e4-b5a2-8b8c80a52424\") " pod="openshift-nmstate/nmstate-handler-j27f6" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.937512 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7f6c194d-4ed9-4ab8-af9c-bb9c44324b0d-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-4xf48\" (UID: \"7f6c194d-4ed9-4ab8-af9c-bb9c44324b0d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4xf48" Feb 18 12:00:44 crc kubenswrapper[4717]: I0218 12:00:44.937550 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2lng\" (UniqueName: \"kubernetes.io/projected/83d51357-d0dc-4297-9449-a066463019f7-kube-api-access-m2lng\") pod \"nmstate-metrics-58c85c668d-nb8mk\" (UID: \"83d51357-d0dc-4297-9449-a066463019f7\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-nb8mk" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.038385 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zld8j\" (UniqueName: 
\"kubernetes.io/projected/58fee9d2-2e42-46e4-b5a2-8b8c80a52424-kube-api-access-zld8j\") pod \"nmstate-handler-j27f6\" (UID: \"58fee9d2-2e42-46e4-b5a2-8b8c80a52424\") " pod="openshift-nmstate/nmstate-handler-j27f6" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.038432 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb74x\" (UniqueName: \"kubernetes.io/projected/f80bcf06-9be6-4c29-9ed7-d575837ff0d6-kube-api-access-gb74x\") pod \"nmstate-console-plugin-5c78fc5d65-z9wv4\" (UID: \"f80bcf06-9be6-4c29-9ed7-d575837ff0d6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.038455 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f80bcf06-9be6-4c29-9ed7-d575837ff0d6-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-z9wv4\" (UID: \"f80bcf06-9be6-4c29-9ed7-d575837ff0d6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.038492 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/58fee9d2-2e42-46e4-b5a2-8b8c80a52424-dbus-socket\") pod \"nmstate-handler-j27f6\" (UID: \"58fee9d2-2e42-46e4-b5a2-8b8c80a52424\") " pod="openshift-nmstate/nmstate-handler-j27f6" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.038517 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7f6c194d-4ed9-4ab8-af9c-bb9c44324b0d-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-4xf48\" (UID: \"7f6c194d-4ed9-4ab8-af9c-bb9c44324b0d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4xf48" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.038535 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-m2lng\" (UniqueName: \"kubernetes.io/projected/83d51357-d0dc-4297-9449-a066463019f7-kube-api-access-m2lng\") pod \"nmstate-metrics-58c85c668d-nb8mk\" (UID: \"83d51357-d0dc-4297-9449-a066463019f7\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-nb8mk" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.038568 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/58fee9d2-2e42-46e4-b5a2-8b8c80a52424-nmstate-lock\") pod \"nmstate-handler-j27f6\" (UID: \"58fee9d2-2e42-46e4-b5a2-8b8c80a52424\") " pod="openshift-nmstate/nmstate-handler-j27f6" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.038588 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbj44\" (UniqueName: \"kubernetes.io/projected/7f6c194d-4ed9-4ab8-af9c-bb9c44324b0d-kube-api-access-pbj44\") pod \"nmstate-webhook-866bcb46dc-4xf48\" (UID: \"7f6c194d-4ed9-4ab8-af9c-bb9c44324b0d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4xf48" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.038608 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/58fee9d2-2e42-46e4-b5a2-8b8c80a52424-ovs-socket\") pod \"nmstate-handler-j27f6\" (UID: \"58fee9d2-2e42-46e4-b5a2-8b8c80a52424\") " pod="openshift-nmstate/nmstate-handler-j27f6" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.038627 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f80bcf06-9be6-4c29-9ed7-d575837ff0d6-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-z9wv4\" (UID: \"f80bcf06-9be6-4c29-9ed7-d575837ff0d6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4" Feb 18 12:00:45 crc kubenswrapper[4717]: E0218 12:00:45.039142 4717 secret.go:188] Couldn't get secret 
openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 18 12:00:45 crc kubenswrapper[4717]: E0218 12:00:45.039200 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f80bcf06-9be6-4c29-9ed7-d575837ff0d6-plugin-serving-cert podName:f80bcf06-9be6-4c29-9ed7-d575837ff0d6 nodeName:}" failed. No retries permitted until 2026-02-18 12:00:45.539183662 +0000 UTC m=+679.941284988 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/f80bcf06-9be6-4c29-9ed7-d575837ff0d6-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-z9wv4" (UID: "f80bcf06-9be6-4c29-9ed7-d575837ff0d6") : secret "plugin-serving-cert" not found Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.039575 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f80bcf06-9be6-4c29-9ed7-d575837ff0d6-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-z9wv4\" (UID: \"f80bcf06-9be6-4c29-9ed7-d575837ff0d6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.039719 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/58fee9d2-2e42-46e4-b5a2-8b8c80a52424-dbus-socket\") pod \"nmstate-handler-j27f6\" (UID: \"58fee9d2-2e42-46e4-b5a2-8b8c80a52424\") " pod="openshift-nmstate/nmstate-handler-j27f6" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.039782 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/58fee9d2-2e42-46e4-b5a2-8b8c80a52424-ovs-socket\") pod \"nmstate-handler-j27f6\" (UID: \"58fee9d2-2e42-46e4-b5a2-8b8c80a52424\") " pod="openshift-nmstate/nmstate-handler-j27f6" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.039826 4717 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/58fee9d2-2e42-46e4-b5a2-8b8c80a52424-nmstate-lock\") pod \"nmstate-handler-j27f6\" (UID: \"58fee9d2-2e42-46e4-b5a2-8b8c80a52424\") " pod="openshift-nmstate/nmstate-handler-j27f6" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.046646 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7f6c194d-4ed9-4ab8-af9c-bb9c44324b0d-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-4xf48\" (UID: \"7f6c194d-4ed9-4ab8-af9c-bb9c44324b0d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4xf48" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.056933 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2lng\" (UniqueName: \"kubernetes.io/projected/83d51357-d0dc-4297-9449-a066463019f7-kube-api-access-m2lng\") pod \"nmstate-metrics-58c85c668d-nb8mk\" (UID: \"83d51357-d0dc-4297-9449-a066463019f7\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-nb8mk" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.057336 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb74x\" (UniqueName: \"kubernetes.io/projected/f80bcf06-9be6-4c29-9ed7-d575837ff0d6-kube-api-access-gb74x\") pod \"nmstate-console-plugin-5c78fc5d65-z9wv4\" (UID: \"f80bcf06-9be6-4c29-9ed7-d575837ff0d6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.062110 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbj44\" (UniqueName: \"kubernetes.io/projected/7f6c194d-4ed9-4ab8-af9c-bb9c44324b0d-kube-api-access-pbj44\") pod \"nmstate-webhook-866bcb46dc-4xf48\" (UID: \"7f6c194d-4ed9-4ab8-af9c-bb9c44324b0d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4xf48" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.063496 4717 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-nb8mk" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.064103 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zld8j\" (UniqueName: \"kubernetes.io/projected/58fee9d2-2e42-46e4-b5a2-8b8c80a52424-kube-api-access-zld8j\") pod \"nmstate-handler-j27f6\" (UID: \"58fee9d2-2e42-46e4-b5a2-8b8c80a52424\") " pod="openshift-nmstate/nmstate-handler-j27f6" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.102732 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-j27f6" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.111512 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4xf48" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.113914 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-65985b9b7-4kd2k"] Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.114724 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.137107 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65985b9b7-4kd2k"] Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.240478 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsrft\" (UniqueName: \"kubernetes.io/projected/41e357ac-2781-4e81-b21d-2291fc46eb3c-kube-api-access-bsrft\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.240530 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41e357ac-2781-4e81-b21d-2291fc46eb3c-console-config\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.240560 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41e357ac-2781-4e81-b21d-2291fc46eb3c-service-ca\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.240602 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41e357ac-2781-4e81-b21d-2291fc46eb3c-oauth-serving-cert\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.240623 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41e357ac-2781-4e81-b21d-2291fc46eb3c-console-serving-cert\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.240731 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e357ac-2781-4e81-b21d-2291fc46eb3c-trusted-ca-bundle\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.240770 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41e357ac-2781-4e81-b21d-2291fc46eb3c-console-oauth-config\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.345213 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41e357ac-2781-4e81-b21d-2291fc46eb3c-oauth-serving-cert\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.345287 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41e357ac-2781-4e81-b21d-2291fc46eb3c-console-serving-cert\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 
12:00:45.345332 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e357ac-2781-4e81-b21d-2291fc46eb3c-trusted-ca-bundle\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.345353 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41e357ac-2781-4e81-b21d-2291fc46eb3c-console-oauth-config\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.345405 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsrft\" (UniqueName: \"kubernetes.io/projected/41e357ac-2781-4e81-b21d-2291fc46eb3c-kube-api-access-bsrft\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.345435 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41e357ac-2781-4e81-b21d-2291fc46eb3c-console-config\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.345467 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41e357ac-2781-4e81-b21d-2291fc46eb3c-service-ca\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.346583 4717 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41e357ac-2781-4e81-b21d-2291fc46eb3c-service-ca\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.346674 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41e357ac-2781-4e81-b21d-2291fc46eb3c-oauth-serving-cert\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.347114 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41e357ac-2781-4e81-b21d-2291fc46eb3c-console-config\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.351276 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41e357ac-2781-4e81-b21d-2291fc46eb3c-console-oauth-config\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.351561 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41e357ac-2781-4e81-b21d-2291fc46eb3c-console-serving-cert\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.351862 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e357ac-2781-4e81-b21d-2291fc46eb3c-trusted-ca-bundle\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.365063 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsrft\" (UniqueName: \"kubernetes.io/projected/41e357ac-2781-4e81-b21d-2291fc46eb3c-kube-api-access-bsrft\") pod \"console-65985b9b7-4kd2k\" (UID: \"41e357ac-2781-4e81-b21d-2291fc46eb3c\") " pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.447825 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-4xf48"] Feb 18 12:00:45 crc kubenswrapper[4717]: W0218 12:00:45.452418 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f6c194d_4ed9_4ab8_af9c_bb9c44324b0d.slice/crio-4d1e9b368c5b41856f2ffa76abb581a70bec009a945c92216825e9888a6870fb WatchSource:0}: Error finding container 4d1e9b368c5b41856f2ffa76abb581a70bec009a945c92216825e9888a6870fb: Status 404 returned error can't find the container with id 4d1e9b368c5b41856f2ffa76abb581a70bec009a945c92216825e9888a6870fb Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.496706 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.547625 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f80bcf06-9be6-4c29-9ed7-d575837ff0d6-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-z9wv4\" (UID: \"f80bcf06-9be6-4c29-9ed7-d575837ff0d6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.551960 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f80bcf06-9be6-4c29-9ed7-d575837ff0d6-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-z9wv4\" (UID: \"f80bcf06-9be6-4c29-9ed7-d575837ff0d6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.708454 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-nb8mk"] Feb 18 12:00:45 crc kubenswrapper[4717]: W0218 12:00:45.716059 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83d51357_d0dc_4297_9449_a066463019f7.slice/crio-b4c056bb32ed8a70c0accf53bd61d025aae716011b0116dc15c79fc0177d739d WatchSource:0}: Error finding container b4c056bb32ed8a70c0accf53bd61d025aae716011b0116dc15c79fc0177d739d: Status 404 returned error can't find the container with id b4c056bb32ed8a70c0accf53bd61d025aae716011b0116dc15c79fc0177d739d Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.761212 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65985b9b7-4kd2k"] Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.796331 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4" Feb 18 12:00:45 crc kubenswrapper[4717]: I0218 12:00:45.995979 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4"] Feb 18 12:00:46 crc kubenswrapper[4717]: I0218 12:00:46.064926 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4xf48" event={"ID":"7f6c194d-4ed9-4ab8-af9c-bb9c44324b0d","Type":"ContainerStarted","Data":"4d1e9b368c5b41856f2ffa76abb581a70bec009a945c92216825e9888a6870fb"} Feb 18 12:00:46 crc kubenswrapper[4717]: I0218 12:00:46.065808 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-nb8mk" event={"ID":"83d51357-d0dc-4297-9449-a066463019f7","Type":"ContainerStarted","Data":"b4c056bb32ed8a70c0accf53bd61d025aae716011b0116dc15c79fc0177d739d"} Feb 18 12:00:46 crc kubenswrapper[4717]: I0218 12:00:46.066541 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4" event={"ID":"f80bcf06-9be6-4c29-9ed7-d575837ff0d6","Type":"ContainerStarted","Data":"1accd33dcf47fb5569aee96348595f903765bac36a8c85a6e9d91f2d15d246f4"} Feb 18 12:00:46 crc kubenswrapper[4717]: I0218 12:00:46.067597 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65985b9b7-4kd2k" event={"ID":"41e357ac-2781-4e81-b21d-2291fc46eb3c","Type":"ContainerStarted","Data":"b4ca0eca3a4050a93642a195e25ae5341c675b8b25e5764825db9d4a02e196f3"} Feb 18 12:00:46 crc kubenswrapper[4717]: I0218 12:00:46.071945 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-j27f6" event={"ID":"58fee9d2-2e42-46e4-b5a2-8b8c80a52424","Type":"ContainerStarted","Data":"068d5a350bab1e516a115aa9974e78db5ebd5cad389448f2d49a34a469f50a9c"} Feb 18 12:00:47 crc kubenswrapper[4717]: I0218 12:00:47.080185 4717 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-console/console-65985b9b7-4kd2k" event={"ID":"41e357ac-2781-4e81-b21d-2291fc46eb3c","Type":"ContainerStarted","Data":"7dc2d10e74b5b9d3bf54d8583a8189639b2397d0dc82ed5b6381ddb0c6a09616"} Feb 18 12:00:47 crc kubenswrapper[4717]: I0218 12:00:47.125393 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-65985b9b7-4kd2k" podStartSLOduration=2.125376877 podStartE2EDuration="2.125376877s" podCreationTimestamp="2026-02-18 12:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:00:47.123569934 +0000 UTC m=+681.525671250" watchObservedRunningTime="2026-02-18 12:00:47.125376877 +0000 UTC m=+681.527478193" Feb 18 12:00:50 crc kubenswrapper[4717]: I0218 12:00:50.116391 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4xf48" event={"ID":"7f6c194d-4ed9-4ab8-af9c-bb9c44324b0d","Type":"ContainerStarted","Data":"0d14da2024ec9b5f95529a2cf1ddaf3b732e5780248e2b25e8f85698568abb04"} Feb 18 12:00:50 crc kubenswrapper[4717]: I0218 12:00:50.119422 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4" event={"ID":"f80bcf06-9be6-4c29-9ed7-d575837ff0d6","Type":"ContainerStarted","Data":"b64d646b0e2fe1cdae1fbb2db4fcb48ce42dccdaf4a429523adb7935a9f77739"} Feb 18 12:00:50 crc kubenswrapper[4717]: I0218 12:00:50.119479 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4xf48" Feb 18 12:00:50 crc kubenswrapper[4717]: I0218 12:00:50.121681 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-j27f6" event={"ID":"58fee9d2-2e42-46e4-b5a2-8b8c80a52424","Type":"ContainerStarted","Data":"37c9901d13cb1506652144db559ecdbf3212764551f555cd7f9cba34b049cfeb"} Feb 18 12:00:50 crc kubenswrapper[4717]: I0218 
12:00:50.121796 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-j27f6" Feb 18 12:00:50 crc kubenswrapper[4717]: I0218 12:00:50.124763 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-nb8mk" event={"ID":"83d51357-d0dc-4297-9449-a066463019f7","Type":"ContainerStarted","Data":"74dfe82b61321212ad2820cc798d626019134128eed5e95d5b757c1a78725cc4"} Feb 18 12:00:50 crc kubenswrapper[4717]: I0218 12:00:50.172916 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4xf48" podStartSLOduration=2.710113808 podStartE2EDuration="6.172856326s" podCreationTimestamp="2026-02-18 12:00:44 +0000 UTC" firstStartedPulling="2026-02-18 12:00:45.455389158 +0000 UTC m=+679.857490474" lastFinishedPulling="2026-02-18 12:00:48.918131676 +0000 UTC m=+683.320232992" observedRunningTime="2026-02-18 12:00:50.134873892 +0000 UTC m=+684.536975228" watchObservedRunningTime="2026-02-18 12:00:50.172856326 +0000 UTC m=+684.574957642" Feb 18 12:00:50 crc kubenswrapper[4717]: I0218 12:00:50.182132 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-j27f6" podStartSLOduration=2.446958401 podStartE2EDuration="6.182105037s" podCreationTimestamp="2026-02-18 12:00:44 +0000 UTC" firstStartedPulling="2026-02-18 12:00:45.171143883 +0000 UTC m=+679.573245199" lastFinishedPulling="2026-02-18 12:00:48.906290509 +0000 UTC m=+683.308391835" observedRunningTime="2026-02-18 12:00:50.176115271 +0000 UTC m=+684.578216587" watchObservedRunningTime="2026-02-18 12:00:50.182105037 +0000 UTC m=+684.584206353" Feb 18 12:00:50 crc kubenswrapper[4717]: I0218 12:00:50.208136 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-z9wv4" podStartSLOduration=3.2827371530000002 podStartE2EDuration="6.20810279s" 
podCreationTimestamp="2026-02-18 12:00:44 +0000 UTC" firstStartedPulling="2026-02-18 12:00:46.002045111 +0000 UTC m=+680.404146417" lastFinishedPulling="2026-02-18 12:00:48.927410728 +0000 UTC m=+683.329512054" observedRunningTime="2026-02-18 12:00:50.194388047 +0000 UTC m=+684.596489403" watchObservedRunningTime="2026-02-18 12:00:50.20810279 +0000 UTC m=+684.610204096" Feb 18 12:00:52 crc kubenswrapper[4717]: I0218 12:00:52.143040 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-nb8mk" event={"ID":"83d51357-d0dc-4297-9449-a066463019f7","Type":"ContainerStarted","Data":"0705eaebecb3af018ed757bc298727b63188d76df9c3639d41d77378d4bb1398"} Feb 18 12:00:52 crc kubenswrapper[4717]: I0218 12:00:52.165440 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-nb8mk" podStartSLOduration=2.443676864 podStartE2EDuration="8.165410415s" podCreationTimestamp="2026-02-18 12:00:44 +0000 UTC" firstStartedPulling="2026-02-18 12:00:45.719987819 +0000 UTC m=+680.122089145" lastFinishedPulling="2026-02-18 12:00:51.44172138 +0000 UTC m=+685.843822696" observedRunningTime="2026-02-18 12:00:52.160866232 +0000 UTC m=+686.562967548" watchObservedRunningTime="2026-02-18 12:00:52.165410415 +0000 UTC m=+686.567511731" Feb 18 12:00:55 crc kubenswrapper[4717]: I0218 12:00:55.128988 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-j27f6" Feb 18 12:00:55 crc kubenswrapper[4717]: I0218 12:00:55.497463 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:55 crc kubenswrapper[4717]: I0218 12:00:55.497634 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:55 crc kubenswrapper[4717]: I0218 12:00:55.502314 4717 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:56 crc kubenswrapper[4717]: I0218 12:00:56.175599 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-65985b9b7-4kd2k" Feb 18 12:00:56 crc kubenswrapper[4717]: I0218 12:00:56.229577 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-mk6cn"] Feb 18 12:01:05 crc kubenswrapper[4717]: I0218 12:01:05.117733 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-4xf48" Feb 18 12:01:17 crc kubenswrapper[4717]: I0218 12:01:17.002281 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd"] Feb 18 12:01:17 crc kubenswrapper[4717]: I0218 12:01:17.004695 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" Feb 18 12:01:17 crc kubenswrapper[4717]: I0218 12:01:17.007003 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 12:01:17 crc kubenswrapper[4717]: I0218 12:01:17.020748 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd"] Feb 18 12:01:17 crc kubenswrapper[4717]: I0218 12:01:17.102495 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjrpf\" (UniqueName: \"kubernetes.io/projected/3a791e8a-cfda-4171-8b8b-1828dcae5419-kube-api-access-rjrpf\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd\" (UID: \"3a791e8a-cfda-4171-8b8b-1828dcae5419\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" Feb 18 12:01:17 crc 
kubenswrapper[4717]: I0218 12:01:17.102615 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a791e8a-cfda-4171-8b8b-1828dcae5419-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd\" (UID: \"3a791e8a-cfda-4171-8b8b-1828dcae5419\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" Feb 18 12:01:17 crc kubenswrapper[4717]: I0218 12:01:17.102646 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a791e8a-cfda-4171-8b8b-1828dcae5419-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd\" (UID: \"3a791e8a-cfda-4171-8b8b-1828dcae5419\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" Feb 18 12:01:17 crc kubenswrapper[4717]: I0218 12:01:17.204383 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a791e8a-cfda-4171-8b8b-1828dcae5419-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd\" (UID: \"3a791e8a-cfda-4171-8b8b-1828dcae5419\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" Feb 18 12:01:17 crc kubenswrapper[4717]: I0218 12:01:17.204435 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a791e8a-cfda-4171-8b8b-1828dcae5419-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd\" (UID: \"3a791e8a-cfda-4171-8b8b-1828dcae5419\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" Feb 18 12:01:17 crc kubenswrapper[4717]: I0218 12:01:17.204494 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjrpf\" 
(UniqueName: \"kubernetes.io/projected/3a791e8a-cfda-4171-8b8b-1828dcae5419-kube-api-access-rjrpf\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd\" (UID: \"3a791e8a-cfda-4171-8b8b-1828dcae5419\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" Feb 18 12:01:17 crc kubenswrapper[4717]: I0218 12:01:17.204996 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a791e8a-cfda-4171-8b8b-1828dcae5419-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd\" (UID: \"3a791e8a-cfda-4171-8b8b-1828dcae5419\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" Feb 18 12:01:17 crc kubenswrapper[4717]: I0218 12:01:17.205016 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a791e8a-cfda-4171-8b8b-1828dcae5419-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd\" (UID: \"3a791e8a-cfda-4171-8b8b-1828dcae5419\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" Feb 18 12:01:17 crc kubenswrapper[4717]: I0218 12:01:17.223222 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjrpf\" (UniqueName: \"kubernetes.io/projected/3a791e8a-cfda-4171-8b8b-1828dcae5419-kube-api-access-rjrpf\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd\" (UID: \"3a791e8a-cfda-4171-8b8b-1828dcae5419\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" Feb 18 12:01:17 crc kubenswrapper[4717]: I0218 12:01:17.334019 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" Feb 18 12:01:17 crc kubenswrapper[4717]: I0218 12:01:17.572797 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd"] Feb 18 12:01:18 crc kubenswrapper[4717]: I0218 12:01:18.296812 4717 generic.go:334] "Generic (PLEG): container finished" podID="3a791e8a-cfda-4171-8b8b-1828dcae5419" containerID="90a30d3834970b0eca6bd5ce7ad29ed3dd6ba5b714770c1f89792afc26ee7dc7" exitCode=0 Feb 18 12:01:18 crc kubenswrapper[4717]: I0218 12:01:18.296942 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" event={"ID":"3a791e8a-cfda-4171-8b8b-1828dcae5419","Type":"ContainerDied","Data":"90a30d3834970b0eca6bd5ce7ad29ed3dd6ba5b714770c1f89792afc26ee7dc7"} Feb 18 12:01:18 crc kubenswrapper[4717]: I0218 12:01:18.297282 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" event={"ID":"3a791e8a-cfda-4171-8b8b-1828dcae5419","Type":"ContainerStarted","Data":"4356da782ffdaa75a3d6ee8a87b7c5196ce49c5c73736def8b229643a0a02b10"} Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.280900 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-mk6cn" podUID="3c042f74-11a5-46a9-bc05-6b3278428e36" containerName="console" containerID="cri-o://104d62000e8e612706ae70094d4f9eae50e00e8019ee9ff36af460491125577c" gracePeriod=15 Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.314278 4717 generic.go:334] "Generic (PLEG): container finished" podID="3a791e8a-cfda-4171-8b8b-1828dcae5419" containerID="054bd905643bd0cbb5d0e3254ff72222b8d03228789b7ebcaf229a3e0b3427e6" exitCode=0 Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.314321 4717 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" event={"ID":"3a791e8a-cfda-4171-8b8b-1828dcae5419","Type":"ContainerDied","Data":"054bd905643bd0cbb5d0e3254ff72222b8d03228789b7ebcaf229a3e0b3427e6"} Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.719533 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-mk6cn_3c042f74-11a5-46a9-bc05-6b3278428e36/console/0.log" Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.719970 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.886242 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-service-ca\") pod \"3c042f74-11a5-46a9-bc05-6b3278428e36\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.886400 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ttck\" (UniqueName: \"kubernetes.io/projected/3c042f74-11a5-46a9-bc05-6b3278428e36-kube-api-access-8ttck\") pod \"3c042f74-11a5-46a9-bc05-6b3278428e36\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.886434 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-oauth-serving-cert\") pod \"3c042f74-11a5-46a9-bc05-6b3278428e36\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.886466 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-console-config\") pod \"3c042f74-11a5-46a9-bc05-6b3278428e36\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.886494 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-trusted-ca-bundle\") pod \"3c042f74-11a5-46a9-bc05-6b3278428e36\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.887468 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-service-ca" (OuterVolumeSpecName: "service-ca") pod "3c042f74-11a5-46a9-bc05-6b3278428e36" (UID: "3c042f74-11a5-46a9-bc05-6b3278428e36"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.887485 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3c042f74-11a5-46a9-bc05-6b3278428e36" (UID: "3c042f74-11a5-46a9-bc05-6b3278428e36"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.887657 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-console-config" (OuterVolumeSpecName: "console-config") pod "3c042f74-11a5-46a9-bc05-6b3278428e36" (UID: "3c042f74-11a5-46a9-bc05-6b3278428e36"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.887699 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3c042f74-11a5-46a9-bc05-6b3278428e36" (UID: "3c042f74-11a5-46a9-bc05-6b3278428e36"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.887742 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c042f74-11a5-46a9-bc05-6b3278428e36-console-serving-cert\") pod \"3c042f74-11a5-46a9-bc05-6b3278428e36\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.888020 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c042f74-11a5-46a9-bc05-6b3278428e36-console-oauth-config\") pod \"3c042f74-11a5-46a9-bc05-6b3278428e36\" (UID: \"3c042f74-11a5-46a9-bc05-6b3278428e36\") " Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.888705 4717 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.888734 4717 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.888751 4717 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.888766 4717 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c042f74-11a5-46a9-bc05-6b3278428e36-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.893510 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c042f74-11a5-46a9-bc05-6b3278428e36-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3c042f74-11a5-46a9-bc05-6b3278428e36" (UID: "3c042f74-11a5-46a9-bc05-6b3278428e36"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.893545 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c042f74-11a5-46a9-bc05-6b3278428e36-kube-api-access-8ttck" (OuterVolumeSpecName: "kube-api-access-8ttck") pod "3c042f74-11a5-46a9-bc05-6b3278428e36" (UID: "3c042f74-11a5-46a9-bc05-6b3278428e36"). InnerVolumeSpecName "kube-api-access-8ttck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.895037 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c042f74-11a5-46a9-bc05-6b3278428e36-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3c042f74-11a5-46a9-bc05-6b3278428e36" (UID: "3c042f74-11a5-46a9-bc05-6b3278428e36"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.990425 4717 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c042f74-11a5-46a9-bc05-6b3278428e36-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.990464 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ttck\" (UniqueName: \"kubernetes.io/projected/3c042f74-11a5-46a9-bc05-6b3278428e36-kube-api-access-8ttck\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:21 crc kubenswrapper[4717]: I0218 12:01:21.990479 4717 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c042f74-11a5-46a9-bc05-6b3278428e36-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:22 crc kubenswrapper[4717]: I0218 12:01:22.323053 4717 generic.go:334] "Generic (PLEG): container finished" podID="3a791e8a-cfda-4171-8b8b-1828dcae5419" containerID="b3eca215a7d7073a794f149f9cdb59b389ecffc382723cd188acd536068fdbba" exitCode=0 Feb 18 12:01:22 crc kubenswrapper[4717]: I0218 12:01:22.323135 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" event={"ID":"3a791e8a-cfda-4171-8b8b-1828dcae5419","Type":"ContainerDied","Data":"b3eca215a7d7073a794f149f9cdb59b389ecffc382723cd188acd536068fdbba"} Feb 18 12:01:22 crc kubenswrapper[4717]: I0218 12:01:22.324610 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-mk6cn_3c042f74-11a5-46a9-bc05-6b3278428e36/console/0.log" Feb 18 12:01:22 crc kubenswrapper[4717]: I0218 12:01:22.324678 4717 generic.go:334] "Generic (PLEG): container finished" podID="3c042f74-11a5-46a9-bc05-6b3278428e36" containerID="104d62000e8e612706ae70094d4f9eae50e00e8019ee9ff36af460491125577c" exitCode=2 Feb 18 12:01:22 
crc kubenswrapper[4717]: I0218 12:01:22.324719 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mk6cn" Feb 18 12:01:22 crc kubenswrapper[4717]: I0218 12:01:22.324721 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mk6cn" event={"ID":"3c042f74-11a5-46a9-bc05-6b3278428e36","Type":"ContainerDied","Data":"104d62000e8e612706ae70094d4f9eae50e00e8019ee9ff36af460491125577c"} Feb 18 12:01:22 crc kubenswrapper[4717]: I0218 12:01:22.324751 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mk6cn" event={"ID":"3c042f74-11a5-46a9-bc05-6b3278428e36","Type":"ContainerDied","Data":"9f582af729b31e6c94b846d87f672d40061403978546399974a7b1424feb79c7"} Feb 18 12:01:22 crc kubenswrapper[4717]: I0218 12:01:22.324773 4717 scope.go:117] "RemoveContainer" containerID="104d62000e8e612706ae70094d4f9eae50e00e8019ee9ff36af460491125577c" Feb 18 12:01:22 crc kubenswrapper[4717]: I0218 12:01:22.354134 4717 scope.go:117] "RemoveContainer" containerID="104d62000e8e612706ae70094d4f9eae50e00e8019ee9ff36af460491125577c" Feb 18 12:01:22 crc kubenswrapper[4717]: E0218 12:01:22.354724 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"104d62000e8e612706ae70094d4f9eae50e00e8019ee9ff36af460491125577c\": container with ID starting with 104d62000e8e612706ae70094d4f9eae50e00e8019ee9ff36af460491125577c not found: ID does not exist" containerID="104d62000e8e612706ae70094d4f9eae50e00e8019ee9ff36af460491125577c" Feb 18 12:01:22 crc kubenswrapper[4717]: I0218 12:01:22.354765 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"104d62000e8e612706ae70094d4f9eae50e00e8019ee9ff36af460491125577c"} err="failed to get container status \"104d62000e8e612706ae70094d4f9eae50e00e8019ee9ff36af460491125577c\": rpc error: code = NotFound desc 
= could not find container \"104d62000e8e612706ae70094d4f9eae50e00e8019ee9ff36af460491125577c\": container with ID starting with 104d62000e8e612706ae70094d4f9eae50e00e8019ee9ff36af460491125577c not found: ID does not exist" Feb 18 12:01:22 crc kubenswrapper[4717]: I0218 12:01:22.360786 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-mk6cn"] Feb 18 12:01:22 crc kubenswrapper[4717]: I0218 12:01:22.366541 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-mk6cn"] Feb 18 12:01:23 crc kubenswrapper[4717]: I0218 12:01:23.045916 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c042f74-11a5-46a9-bc05-6b3278428e36" path="/var/lib/kubelet/pods/3c042f74-11a5-46a9-bc05-6b3278428e36/volumes" Feb 18 12:01:23 crc kubenswrapper[4717]: I0218 12:01:23.591428 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" Feb 18 12:01:23 crc kubenswrapper[4717]: I0218 12:01:23.713637 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a791e8a-cfda-4171-8b8b-1828dcae5419-bundle\") pod \"3a791e8a-cfda-4171-8b8b-1828dcae5419\" (UID: \"3a791e8a-cfda-4171-8b8b-1828dcae5419\") " Feb 18 12:01:23 crc kubenswrapper[4717]: I0218 12:01:23.713762 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjrpf\" (UniqueName: \"kubernetes.io/projected/3a791e8a-cfda-4171-8b8b-1828dcae5419-kube-api-access-rjrpf\") pod \"3a791e8a-cfda-4171-8b8b-1828dcae5419\" (UID: \"3a791e8a-cfda-4171-8b8b-1828dcae5419\") " Feb 18 12:01:23 crc kubenswrapper[4717]: I0218 12:01:23.713815 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a791e8a-cfda-4171-8b8b-1828dcae5419-util\") pod 
\"3a791e8a-cfda-4171-8b8b-1828dcae5419\" (UID: \"3a791e8a-cfda-4171-8b8b-1828dcae5419\") " Feb 18 12:01:23 crc kubenswrapper[4717]: I0218 12:01:23.715254 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a791e8a-cfda-4171-8b8b-1828dcae5419-bundle" (OuterVolumeSpecName: "bundle") pod "3a791e8a-cfda-4171-8b8b-1828dcae5419" (UID: "3a791e8a-cfda-4171-8b8b-1828dcae5419"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:01:23 crc kubenswrapper[4717]: I0218 12:01:23.719690 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a791e8a-cfda-4171-8b8b-1828dcae5419-kube-api-access-rjrpf" (OuterVolumeSpecName: "kube-api-access-rjrpf") pod "3a791e8a-cfda-4171-8b8b-1828dcae5419" (UID: "3a791e8a-cfda-4171-8b8b-1828dcae5419"). InnerVolumeSpecName "kube-api-access-rjrpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:01:23 crc kubenswrapper[4717]: I0218 12:01:23.723818 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a791e8a-cfda-4171-8b8b-1828dcae5419-util" (OuterVolumeSpecName: "util") pod "3a791e8a-cfda-4171-8b8b-1828dcae5419" (UID: "3a791e8a-cfda-4171-8b8b-1828dcae5419"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:01:23 crc kubenswrapper[4717]: I0218 12:01:23.815415 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjrpf\" (UniqueName: \"kubernetes.io/projected/3a791e8a-cfda-4171-8b8b-1828dcae5419-kube-api-access-rjrpf\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:23 crc kubenswrapper[4717]: I0218 12:01:23.815495 4717 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a791e8a-cfda-4171-8b8b-1828dcae5419-util\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:23 crc kubenswrapper[4717]: I0218 12:01:23.815514 4717 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a791e8a-cfda-4171-8b8b-1828dcae5419-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:01:24 crc kubenswrapper[4717]: I0218 12:01:24.341720 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" event={"ID":"3a791e8a-cfda-4171-8b8b-1828dcae5419","Type":"ContainerDied","Data":"4356da782ffdaa75a3d6ee8a87b7c5196ce49c5c73736def8b229643a0a02b10"} Feb 18 12:01:24 crc kubenswrapper[4717]: I0218 12:01:24.341765 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4356da782ffdaa75a3d6ee8a87b7c5196ce49c5c73736def8b229643a0a02b10" Feb 18 12:01:24 crc kubenswrapper[4717]: I0218 12:01:24.341775 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd" Feb 18 12:01:31 crc kubenswrapper[4717]: I0218 12:01:31.989406 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s"] Feb 18 12:01:31 crc kubenswrapper[4717]: E0218 12:01:31.990486 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a791e8a-cfda-4171-8b8b-1828dcae5419" containerName="pull" Feb 18 12:01:31 crc kubenswrapper[4717]: I0218 12:01:31.990504 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a791e8a-cfda-4171-8b8b-1828dcae5419" containerName="pull" Feb 18 12:01:31 crc kubenswrapper[4717]: E0218 12:01:31.990529 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c042f74-11a5-46a9-bc05-6b3278428e36" containerName="console" Feb 18 12:01:31 crc kubenswrapper[4717]: I0218 12:01:31.990539 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c042f74-11a5-46a9-bc05-6b3278428e36" containerName="console" Feb 18 12:01:31 crc kubenswrapper[4717]: E0218 12:01:31.990552 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a791e8a-cfda-4171-8b8b-1828dcae5419" containerName="extract" Feb 18 12:01:31 crc kubenswrapper[4717]: I0218 12:01:31.990561 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a791e8a-cfda-4171-8b8b-1828dcae5419" containerName="extract" Feb 18 12:01:31 crc kubenswrapper[4717]: E0218 12:01:31.990575 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a791e8a-cfda-4171-8b8b-1828dcae5419" containerName="util" Feb 18 12:01:31 crc kubenswrapper[4717]: I0218 12:01:31.990582 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a791e8a-cfda-4171-8b8b-1828dcae5419" containerName="util" Feb 18 12:01:31 crc kubenswrapper[4717]: I0218 12:01:31.990723 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a791e8a-cfda-4171-8b8b-1828dcae5419" 
containerName="extract" Feb 18 12:01:31 crc kubenswrapper[4717]: I0218 12:01:31.990752 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c042f74-11a5-46a9-bc05-6b3278428e36" containerName="console" Feb 18 12:01:31 crc kubenswrapper[4717]: I0218 12:01:31.991292 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s" Feb 18 12:01:31 crc kubenswrapper[4717]: I0218 12:01:31.994237 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 18 12:01:31 crc kubenswrapper[4717]: I0218 12:01:31.994462 4717 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 18 12:01:31 crc kubenswrapper[4717]: I0218 12:01:31.994491 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 18 12:01:31 crc kubenswrapper[4717]: I0218 12:01:31.994665 4717 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-lxdds" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.005372 4717 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.011286 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s"] Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.120468 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cc66c29-35b2-4c85-95d0-ad78febc48c8-webhook-cert\") pod \"metallb-operator-controller-manager-6ccf94b89b-k7n5s\" (UID: \"4cc66c29-35b2-4c85-95d0-ad78febc48c8\") " pod="metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s" Feb 18 
12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.120525 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jdkx\" (UniqueName: \"kubernetes.io/projected/4cc66c29-35b2-4c85-95d0-ad78febc48c8-kube-api-access-6jdkx\") pod \"metallb-operator-controller-manager-6ccf94b89b-k7n5s\" (UID: \"4cc66c29-35b2-4c85-95d0-ad78febc48c8\") " pod="metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.120549 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cc66c29-35b2-4c85-95d0-ad78febc48c8-apiservice-cert\") pod \"metallb-operator-controller-manager-6ccf94b89b-k7n5s\" (UID: \"4cc66c29-35b2-4c85-95d0-ad78febc48c8\") " pod="metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.221821 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cc66c29-35b2-4c85-95d0-ad78febc48c8-webhook-cert\") pod \"metallb-operator-controller-manager-6ccf94b89b-k7n5s\" (UID: \"4cc66c29-35b2-4c85-95d0-ad78febc48c8\") " pod="metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.221888 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jdkx\" (UniqueName: \"kubernetes.io/projected/4cc66c29-35b2-4c85-95d0-ad78febc48c8-kube-api-access-6jdkx\") pod \"metallb-operator-controller-manager-6ccf94b89b-k7n5s\" (UID: \"4cc66c29-35b2-4c85-95d0-ad78febc48c8\") " pod="metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.221927 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/4cc66c29-35b2-4c85-95d0-ad78febc48c8-apiservice-cert\") pod \"metallb-operator-controller-manager-6ccf94b89b-k7n5s\" (UID: \"4cc66c29-35b2-4c85-95d0-ad78febc48c8\") " pod="metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.243528 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cc66c29-35b2-4c85-95d0-ad78febc48c8-webhook-cert\") pod \"metallb-operator-controller-manager-6ccf94b89b-k7n5s\" (UID: \"4cc66c29-35b2-4c85-95d0-ad78febc48c8\") " pod="metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.243752 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cc66c29-35b2-4c85-95d0-ad78febc48c8-apiservice-cert\") pod \"metallb-operator-controller-manager-6ccf94b89b-k7n5s\" (UID: \"4cc66c29-35b2-4c85-95d0-ad78febc48c8\") " pod="metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.246607 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jdkx\" (UniqueName: \"kubernetes.io/projected/4cc66c29-35b2-4c85-95d0-ad78febc48c8-kube-api-access-6jdkx\") pod \"metallb-operator-controller-manager-6ccf94b89b-k7n5s\" (UID: \"4cc66c29-35b2-4c85-95d0-ad78febc48c8\") " pod="metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.315398 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.380446 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7"] Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.381881 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.384028 4717 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-x45rc" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.385667 4717 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.385861 4717 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.399826 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7"] Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.526048 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5bf9065-9c80-484d-9700-dc484f20a071-apiservice-cert\") pod \"metallb-operator-webhook-server-fb5446db6-w9jm7\" (UID: \"d5bf9065-9c80-484d-9700-dc484f20a071\") " pod="metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.526120 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5bf9065-9c80-484d-9700-dc484f20a071-webhook-cert\") pod \"metallb-operator-webhook-server-fb5446db6-w9jm7\" 
(UID: \"d5bf9065-9c80-484d-9700-dc484f20a071\") " pod="metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.526160 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8cl\" (UniqueName: \"kubernetes.io/projected/d5bf9065-9c80-484d-9700-dc484f20a071-kube-api-access-fl8cl\") pod \"metallb-operator-webhook-server-fb5446db6-w9jm7\" (UID: \"d5bf9065-9c80-484d-9700-dc484f20a071\") " pod="metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.628075 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5bf9065-9c80-484d-9700-dc484f20a071-apiservice-cert\") pod \"metallb-operator-webhook-server-fb5446db6-w9jm7\" (UID: \"d5bf9065-9c80-484d-9700-dc484f20a071\") " pod="metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.628188 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5bf9065-9c80-484d-9700-dc484f20a071-webhook-cert\") pod \"metallb-operator-webhook-server-fb5446db6-w9jm7\" (UID: \"d5bf9065-9c80-484d-9700-dc484f20a071\") " pod="metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.628228 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl8cl\" (UniqueName: \"kubernetes.io/projected/d5bf9065-9c80-484d-9700-dc484f20a071-kube-api-access-fl8cl\") pod \"metallb-operator-webhook-server-fb5446db6-w9jm7\" (UID: \"d5bf9065-9c80-484d-9700-dc484f20a071\") " pod="metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.636406 4717 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5bf9065-9c80-484d-9700-dc484f20a071-apiservice-cert\") pod \"metallb-operator-webhook-server-fb5446db6-w9jm7\" (UID: \"d5bf9065-9c80-484d-9700-dc484f20a071\") " pod="metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.637312 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5bf9065-9c80-484d-9700-dc484f20a071-webhook-cert\") pod \"metallb-operator-webhook-server-fb5446db6-w9jm7\" (UID: \"d5bf9065-9c80-484d-9700-dc484f20a071\") " pod="metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.655745 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl8cl\" (UniqueName: \"kubernetes.io/projected/d5bf9065-9c80-484d-9700-dc484f20a071-kube-api-access-fl8cl\") pod \"metallb-operator-webhook-server-fb5446db6-w9jm7\" (UID: \"d5bf9065-9c80-484d-9700-dc484f20a071\") " pod="metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7" Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.672900 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s"] Feb 18 12:01:32 crc kubenswrapper[4717]: I0218 12:01:32.707414 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7" Feb 18 12:01:33 crc kubenswrapper[4717]: I0218 12:01:33.231354 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7"] Feb 18 12:01:33 crc kubenswrapper[4717]: I0218 12:01:33.391723 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7" event={"ID":"d5bf9065-9c80-484d-9700-dc484f20a071","Type":"ContainerStarted","Data":"84bf2acc035f3be65b301d4875346d18c8aa0cad6eab9e26d588909f28c9fb79"} Feb 18 12:01:33 crc kubenswrapper[4717]: I0218 12:01:33.392932 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s" event={"ID":"4cc66c29-35b2-4c85-95d0-ad78febc48c8","Type":"ContainerStarted","Data":"f9743854be732091974cd730c980e65b9bfc64869a51f1e557e01204e93b30bd"} Feb 18 12:01:36 crc kubenswrapper[4717]: I0218 12:01:36.413549 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s" event={"ID":"4cc66c29-35b2-4c85-95d0-ad78febc48c8","Type":"ContainerStarted","Data":"7065685f0df2a0e046609170e523c227a29cdd1645495ff7f13454995e8f2cf8"} Feb 18 12:01:36 crc kubenswrapper[4717]: I0218 12:01:36.414346 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s" Feb 18 12:01:36 crc kubenswrapper[4717]: I0218 12:01:36.442933 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s" podStartSLOduration=2.159438471 podStartE2EDuration="5.442915353s" podCreationTimestamp="2026-02-18 12:01:31 +0000 UTC" firstStartedPulling="2026-02-18 12:01:32.682909026 +0000 UTC m=+727.085010342" lastFinishedPulling="2026-02-18 12:01:35.966385908 +0000 UTC 
m=+730.368487224" observedRunningTime="2026-02-18 12:01:36.439488913 +0000 UTC m=+730.841590229" watchObservedRunningTime="2026-02-18 12:01:36.442915353 +0000 UTC m=+730.845016669" Feb 18 12:01:38 crc kubenswrapper[4717]: I0218 12:01:38.429287 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7" event={"ID":"d5bf9065-9c80-484d-9700-dc484f20a071","Type":"ContainerStarted","Data":"0fddaef3bcc0ef055123b93731e3dad46c5c55cc7defca67dd4ea691126021d9"} Feb 18 12:01:38 crc kubenswrapper[4717]: I0218 12:01:38.429785 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7" Feb 18 12:01:38 crc kubenswrapper[4717]: I0218 12:01:38.448677 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7" podStartSLOduration=1.5536814030000001 podStartE2EDuration="6.448659052s" podCreationTimestamp="2026-02-18 12:01:32 +0000 UTC" firstStartedPulling="2026-02-18 12:01:33.272403152 +0000 UTC m=+727.674504468" lastFinishedPulling="2026-02-18 12:01:38.167380801 +0000 UTC m=+732.569482117" observedRunningTime="2026-02-18 12:01:38.446053257 +0000 UTC m=+732.848154583" watchObservedRunningTime="2026-02-18 12:01:38.448659052 +0000 UTC m=+732.850760368" Feb 18 12:01:52 crc kubenswrapper[4717]: I0218 12:01:52.713419 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-fb5446db6-w9jm7" Feb 18 12:02:03 crc kubenswrapper[4717]: I0218 12:02:03.502340 4717 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 12:02:12 crc kubenswrapper[4717]: I0218 12:02:12.319649 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6ccf94b89b-k7n5s" Feb 18 12:02:12 
crc kubenswrapper[4717]: I0218 12:02:12.975415 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg"] Feb 18 12:02:12 crc kubenswrapper[4717]: I0218 12:02:12.977425 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg" Feb 18 12:02:12 crc kubenswrapper[4717]: I0218 12:02:12.991044 4717 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nc4j2" Feb 18 12:02:12 crc kubenswrapper[4717]: I0218 12:02:12.991409 4717 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 18 12:02:12 crc kubenswrapper[4717]: I0218 12:02:12.995710 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hwtqk"] Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.011726 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.020889 4717 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.021219 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.029566 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg"] Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.031468 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c7a04c5-e38e-41bf-9343-b567857783d6-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-xr4mg\" (UID: \"3c7a04c5-e38e-41bf-9343-b567857783d6\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg" Feb 18 12:02:13 crc 
kubenswrapper[4717]: I0218 12:02:13.031532 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbc8c\" (UniqueName: \"kubernetes.io/projected/3c7a04c5-e38e-41bf-9343-b567857783d6-kube-api-access-fbc8c\") pod \"frr-k8s-webhook-server-78b44bf5bb-xr4mg\" (UID: \"3c7a04c5-e38e-41bf-9343-b567857783d6\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.133082 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-frr-startup\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.133138 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbc8c\" (UniqueName: \"kubernetes.io/projected/3c7a04c5-e38e-41bf-9343-b567857783d6-kube-api-access-fbc8c\") pod \"frr-k8s-webhook-server-78b44bf5bb-xr4mg\" (UID: \"3c7a04c5-e38e-41bf-9343-b567857783d6\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.133179 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-metrics-certs\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.133205 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6lfb\" (UniqueName: \"kubernetes.io/projected/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-kube-api-access-d6lfb\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " 
pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.133228 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-metrics\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.133249 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-frr-sockets\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.133267 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-frr-conf\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.133324 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c7a04c5-e38e-41bf-9343-b567857783d6-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-xr4mg\" (UID: \"3c7a04c5-e38e-41bf-9343-b567857783d6\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.133343 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-reloader\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: E0218 12:02:13.134173 4717 secret.go:188] 
Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 18 12:02:13 crc kubenswrapper[4717]: E0218 12:02:13.134319 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c7a04c5-e38e-41bf-9343-b567857783d6-cert podName:3c7a04c5-e38e-41bf-9343-b567857783d6 nodeName:}" failed. No retries permitted until 2026-02-18 12:02:13.634291983 +0000 UTC m=+768.036393309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3c7a04c5-e38e-41bf-9343-b567857783d6-cert") pod "frr-k8s-webhook-server-78b44bf5bb-xr4mg" (UID: "3c7a04c5-e38e-41bf-9343-b567857783d6") : secret "frr-k8s-webhook-server-cert" not found Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.136171 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-vfwxr"] Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.137184 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-vfwxr" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.142743 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.142798 4717 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-8ppg8" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.142908 4717 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.143122 4717 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.173566 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-fbbnw"] Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.176204 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-fbbnw" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.181070 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-fbbnw"] Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.181520 4717 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.182217 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbc8c\" (UniqueName: \"kubernetes.io/projected/3c7a04c5-e38e-41bf-9343-b567857783d6-kube-api-access-fbc8c\") pod \"frr-k8s-webhook-server-78b44bf5bb-xr4mg\" (UID: \"3c7a04c5-e38e-41bf-9343-b567857783d6\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.239341 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-metallb-excludel2\") pod \"speaker-vfwxr\" (UID: \"d668cbc3-c191-43fc-bb6f-64f4b7bdb969\") " pod="metallb-system/speaker-vfwxr" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.239683 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/060557a2-52b7-4e87-908f-0ea8b0febb4c-metrics-certs\") pod \"controller-69bbfbf88f-fbbnw\" (UID: \"060557a2-52b7-4e87-908f-0ea8b0febb4c\") " pod="metallb-system/controller-69bbfbf88f-fbbnw" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.239765 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-metrics-certs\") pod \"speaker-vfwxr\" (UID: \"d668cbc3-c191-43fc-bb6f-64f4b7bdb969\") " 
pod="metallb-system/speaker-vfwxr" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.239875 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-reloader\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.239960 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dfg6\" (UniqueName: \"kubernetes.io/projected/060557a2-52b7-4e87-908f-0ea8b0febb4c-kube-api-access-8dfg6\") pod \"controller-69bbfbf88f-fbbnw\" (UID: \"060557a2-52b7-4e87-908f-0ea8b0febb4c\") " pod="metallb-system/controller-69bbfbf88f-fbbnw" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.240042 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdsnc\" (UniqueName: \"kubernetes.io/projected/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-kube-api-access-sdsnc\") pod \"speaker-vfwxr\" (UID: \"d668cbc3-c191-43fc-bb6f-64f4b7bdb969\") " pod="metallb-system/speaker-vfwxr" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.240126 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-memberlist\") pod \"speaker-vfwxr\" (UID: \"d668cbc3-c191-43fc-bb6f-64f4b7bdb969\") " pod="metallb-system/speaker-vfwxr" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.240192 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/060557a2-52b7-4e87-908f-0ea8b0febb4c-cert\") pod \"controller-69bbfbf88f-fbbnw\" (UID: \"060557a2-52b7-4e87-908f-0ea8b0febb4c\") " pod="metallb-system/controller-69bbfbf88f-fbbnw" Feb 18 12:02:13 crc 
kubenswrapper[4717]: I0218 12:02:13.240288 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-frr-startup\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.240380 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-metrics-certs\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.240458 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6lfb\" (UniqueName: \"kubernetes.io/projected/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-kube-api-access-d6lfb\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.240527 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-metrics\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.240602 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-frr-sockets\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.240669 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-frr-conf\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.241145 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-frr-conf\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.241438 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-reloader\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.242264 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-frr-startup\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.244804 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-metrics\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.245456 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-frr-sockets\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.246984 4717 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-metrics-certs\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.285418 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6lfb\" (UniqueName: \"kubernetes.io/projected/74dfbdfa-ea21-46dd-8dac-c8aac0050e51-kube-api-access-d6lfb\") pod \"frr-k8s-hwtqk\" (UID: \"74dfbdfa-ea21-46dd-8dac-c8aac0050e51\") " pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.341714 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dfg6\" (UniqueName: \"kubernetes.io/projected/060557a2-52b7-4e87-908f-0ea8b0febb4c-kube-api-access-8dfg6\") pod \"controller-69bbfbf88f-fbbnw\" (UID: \"060557a2-52b7-4e87-908f-0ea8b0febb4c\") " pod="metallb-system/controller-69bbfbf88f-fbbnw" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.341779 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdsnc\" (UniqueName: \"kubernetes.io/projected/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-kube-api-access-sdsnc\") pod \"speaker-vfwxr\" (UID: \"d668cbc3-c191-43fc-bb6f-64f4b7bdb969\") " pod="metallb-system/speaker-vfwxr" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.341819 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-memberlist\") pod \"speaker-vfwxr\" (UID: \"d668cbc3-c191-43fc-bb6f-64f4b7bdb969\") " pod="metallb-system/speaker-vfwxr" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.341837 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/060557a2-52b7-4e87-908f-0ea8b0febb4c-cert\") pod \"controller-69bbfbf88f-fbbnw\" (UID: \"060557a2-52b7-4e87-908f-0ea8b0febb4c\") " pod="metallb-system/controller-69bbfbf88f-fbbnw" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.341880 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-metallb-excludel2\") pod \"speaker-vfwxr\" (UID: \"d668cbc3-c191-43fc-bb6f-64f4b7bdb969\") " pod="metallb-system/speaker-vfwxr" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.341906 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/060557a2-52b7-4e87-908f-0ea8b0febb4c-metrics-certs\") pod \"controller-69bbfbf88f-fbbnw\" (UID: \"060557a2-52b7-4e87-908f-0ea8b0febb4c\") " pod="metallb-system/controller-69bbfbf88f-fbbnw" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.341924 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-metrics-certs\") pod \"speaker-vfwxr\" (UID: \"d668cbc3-c191-43fc-bb6f-64f4b7bdb969\") " pod="metallb-system/speaker-vfwxr" Feb 18 12:02:13 crc kubenswrapper[4717]: E0218 12:02:13.342120 4717 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 18 12:02:13 crc kubenswrapper[4717]: E0218 12:02:13.342188 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-metrics-certs podName:d668cbc3-c191-43fc-bb6f-64f4b7bdb969 nodeName:}" failed. No retries permitted until 2026-02-18 12:02:13.842165442 +0000 UTC m=+768.244266758 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-metrics-certs") pod "speaker-vfwxr" (UID: "d668cbc3-c191-43fc-bb6f-64f4b7bdb969") : secret "speaker-certs-secret" not found Feb 18 12:02:13 crc kubenswrapper[4717]: E0218 12:02:13.342914 4717 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 18 12:02:13 crc kubenswrapper[4717]: E0218 12:02:13.342979 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/060557a2-52b7-4e87-908f-0ea8b0febb4c-metrics-certs podName:060557a2-52b7-4e87-908f-0ea8b0febb4c nodeName:}" failed. No retries permitted until 2026-02-18 12:02:13.842963876 +0000 UTC m=+768.245065202 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/060557a2-52b7-4e87-908f-0ea8b0febb4c-metrics-certs") pod "controller-69bbfbf88f-fbbnw" (UID: "060557a2-52b7-4e87-908f-0ea8b0febb4c") : secret "controller-certs-secret" not found Feb 18 12:02:13 crc kubenswrapper[4717]: E0218 12:02:13.343019 4717 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.343117 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-metallb-excludel2\") pod \"speaker-vfwxr\" (UID: \"d668cbc3-c191-43fc-bb6f-64f4b7bdb969\") " pod="metallb-system/speaker-vfwxr" Feb 18 12:02:13 crc kubenswrapper[4717]: E0218 12:02:13.343141 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-memberlist podName:d668cbc3-c191-43fc-bb6f-64f4b7bdb969 nodeName:}" failed. 
No retries permitted until 2026-02-18 12:02:13.84310765 +0000 UTC m=+768.245208966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-memberlist") pod "speaker-vfwxr" (UID: "d668cbc3-c191-43fc-bb6f-64f4b7bdb969") : secret "metallb-memberlist" not found Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.346583 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/060557a2-52b7-4e87-908f-0ea8b0febb4c-cert\") pod \"controller-69bbfbf88f-fbbnw\" (UID: \"060557a2-52b7-4e87-908f-0ea8b0febb4c\") " pod="metallb-system/controller-69bbfbf88f-fbbnw" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.367734 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dfg6\" (UniqueName: \"kubernetes.io/projected/060557a2-52b7-4e87-908f-0ea8b0febb4c-kube-api-access-8dfg6\") pod \"controller-69bbfbf88f-fbbnw\" (UID: \"060557a2-52b7-4e87-908f-0ea8b0febb4c\") " pod="metallb-system/controller-69bbfbf88f-fbbnw" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.369567 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.371627 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdsnc\" (UniqueName: \"kubernetes.io/projected/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-kube-api-access-sdsnc\") pod \"speaker-vfwxr\" (UID: \"d668cbc3-c191-43fc-bb6f-64f4b7bdb969\") " pod="metallb-system/speaker-vfwxr" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.642495 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwtqk" event={"ID":"74dfbdfa-ea21-46dd-8dac-c8aac0050e51","Type":"ContainerStarted","Data":"4d5841a96d62e99f251edb5a2fb1a40a14b5a2df92d13fb502f462a3af5af1d9"} Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.646449 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c7a04c5-e38e-41bf-9343-b567857783d6-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-xr4mg\" (UID: \"3c7a04c5-e38e-41bf-9343-b567857783d6\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.650938 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3c7a04c5-e38e-41bf-9343-b567857783d6-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-xr4mg\" (UID: \"3c7a04c5-e38e-41bf-9343-b567857783d6\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.850201 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/060557a2-52b7-4e87-908f-0ea8b0febb4c-metrics-certs\") pod \"controller-69bbfbf88f-fbbnw\" (UID: \"060557a2-52b7-4e87-908f-0ea8b0febb4c\") " pod="metallb-system/controller-69bbfbf88f-fbbnw" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.850322 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-metrics-certs\") pod \"speaker-vfwxr\" (UID: \"d668cbc3-c191-43fc-bb6f-64f4b7bdb969\") " pod="metallb-system/speaker-vfwxr" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.850396 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-memberlist\") pod \"speaker-vfwxr\" (UID: \"d668cbc3-c191-43fc-bb6f-64f4b7bdb969\") " pod="metallb-system/speaker-vfwxr" Feb 18 12:02:13 crc kubenswrapper[4717]: E0218 12:02:13.850567 4717 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 12:02:13 crc kubenswrapper[4717]: E0218 12:02:13.850655 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-memberlist podName:d668cbc3-c191-43fc-bb6f-64f4b7bdb969 nodeName:}" failed. No retries permitted until 2026-02-18 12:02:14.850631574 +0000 UTC m=+769.252732890 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-memberlist") pod "speaker-vfwxr" (UID: "d668cbc3-c191-43fc-bb6f-64f4b7bdb969") : secret "metallb-memberlist" not found Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.853795 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-metrics-certs\") pod \"speaker-vfwxr\" (UID: \"d668cbc3-c191-43fc-bb6f-64f4b7bdb969\") " pod="metallb-system/speaker-vfwxr" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.853928 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/060557a2-52b7-4e87-908f-0ea8b0febb4c-metrics-certs\") pod \"controller-69bbfbf88f-fbbnw\" (UID: \"060557a2-52b7-4e87-908f-0ea8b0febb4c\") " pod="metallb-system/controller-69bbfbf88f-fbbnw" Feb 18 12:02:13 crc kubenswrapper[4717]: I0218 12:02:13.910566 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg" Feb 18 12:02:14 crc kubenswrapper[4717]: I0218 12:02:14.127565 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-fbbnw" Feb 18 12:02:14 crc kubenswrapper[4717]: I0218 12:02:14.153974 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg"] Feb 18 12:02:14 crc kubenswrapper[4717]: W0218 12:02:14.167043 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c7a04c5_e38e_41bf_9343_b567857783d6.slice/crio-e97f7fa132d7421b66b31b2f3159322abace064d7ed6a9a9e07fb2d06f1b48fb WatchSource:0}: Error finding container e97f7fa132d7421b66b31b2f3159322abace064d7ed6a9a9e07fb2d06f1b48fb: Status 404 returned error can't find the container with id e97f7fa132d7421b66b31b2f3159322abace064d7ed6a9a9e07fb2d06f1b48fb Feb 18 12:02:14 crc kubenswrapper[4717]: I0218 12:02:14.351656 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-fbbnw"] Feb 18 12:02:14 crc kubenswrapper[4717]: I0218 12:02:14.650138 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg" event={"ID":"3c7a04c5-e38e-41bf-9343-b567857783d6","Type":"ContainerStarted","Data":"e97f7fa132d7421b66b31b2f3159322abace064d7ed6a9a9e07fb2d06f1b48fb"} Feb 18 12:02:14 crc kubenswrapper[4717]: I0218 12:02:14.651788 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-fbbnw" event={"ID":"060557a2-52b7-4e87-908f-0ea8b0febb4c","Type":"ContainerStarted","Data":"29c666f032573db39076c2e13b42c206a5d1946757f0c43b7187fb060ceaf55d"} Feb 18 12:02:14 crc kubenswrapper[4717]: I0218 12:02:14.651821 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-fbbnw" event={"ID":"060557a2-52b7-4e87-908f-0ea8b0febb4c","Type":"ContainerStarted","Data":"6fe92f48bff20f83c11d776ff7513e10b40192d0058da06577b4ef60bc624c7d"} Feb 18 12:02:14 crc kubenswrapper[4717]: I0218 12:02:14.866579 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-memberlist\") pod \"speaker-vfwxr\" (UID: \"d668cbc3-c191-43fc-bb6f-64f4b7bdb969\") " pod="metallb-system/speaker-vfwxr" Feb 18 12:02:14 crc kubenswrapper[4717]: I0218 12:02:14.873751 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d668cbc3-c191-43fc-bb6f-64f4b7bdb969-memberlist\") pod \"speaker-vfwxr\" (UID: \"d668cbc3-c191-43fc-bb6f-64f4b7bdb969\") " pod="metallb-system/speaker-vfwxr" Feb 18 12:02:14 crc kubenswrapper[4717]: I0218 12:02:14.960092 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-vfwxr" Feb 18 12:02:14 crc kubenswrapper[4717]: W0218 12:02:14.983756 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd668cbc3_c191_43fc_bb6f_64f4b7bdb969.slice/crio-d6609b1c3f8d771c6f06a931e2e35c7246c2bc2b62fd9fe2dbda090d3571fdfd WatchSource:0}: Error finding container d6609b1c3f8d771c6f06a931e2e35c7246c2bc2b62fd9fe2dbda090d3571fdfd: Status 404 returned error can't find the container with id d6609b1c3f8d771c6f06a931e2e35c7246c2bc2b62fd9fe2dbda090d3571fdfd Feb 18 12:02:15 crc kubenswrapper[4717]: I0218 12:02:15.668324 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-fbbnw" event={"ID":"060557a2-52b7-4e87-908f-0ea8b0febb4c","Type":"ContainerStarted","Data":"dcb1fafa62d140970250bd42ce8499c97ffc4e8c4cfeaa7fa36cf6ef6a793882"} Feb 18 12:02:15 crc kubenswrapper[4717]: I0218 12:02:15.668822 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-fbbnw" Feb 18 12:02:15 crc kubenswrapper[4717]: I0218 12:02:15.682663 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vfwxr" 
event={"ID":"d668cbc3-c191-43fc-bb6f-64f4b7bdb969","Type":"ContainerStarted","Data":"37335bb1d041c063a1b9b59c3a19074e31a21a9da9df8ad29bc1acdc748949bf"} Feb 18 12:02:15 crc kubenswrapper[4717]: I0218 12:02:15.682716 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vfwxr" event={"ID":"d668cbc3-c191-43fc-bb6f-64f4b7bdb969","Type":"ContainerStarted","Data":"4cac3cb8753893c488014b000a8044b1b83e71451ab9c107762d7f76b65a84dc"} Feb 18 12:02:15 crc kubenswrapper[4717]: I0218 12:02:15.682726 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vfwxr" event={"ID":"d668cbc3-c191-43fc-bb6f-64f4b7bdb969","Type":"ContainerStarted","Data":"d6609b1c3f8d771c6f06a931e2e35c7246c2bc2b62fd9fe2dbda090d3571fdfd"} Feb 18 12:02:15 crc kubenswrapper[4717]: I0218 12:02:15.683072 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-vfwxr" Feb 18 12:02:15 crc kubenswrapper[4717]: I0218 12:02:15.705258 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-fbbnw" podStartSLOduration=2.7052357430000002 podStartE2EDuration="2.705235743s" podCreationTimestamp="2026-02-18 12:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:02:15.7030583 +0000 UTC m=+770.105159626" watchObservedRunningTime="2026-02-18 12:02:15.705235743 +0000 UTC m=+770.107337069" Feb 18 12:02:15 crc kubenswrapper[4717]: I0218 12:02:15.756577 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-vfwxr" podStartSLOduration=2.756542924 podStartE2EDuration="2.756542924s" podCreationTimestamp="2026-02-18 12:02:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:02:15.752320151 +0000 UTC m=+770.154421467" 
watchObservedRunningTime="2026-02-18 12:02:15.756542924 +0000 UTC m=+770.158644240" Feb 18 12:02:21 crc kubenswrapper[4717]: I0218 12:02:21.726687 4717 generic.go:334] "Generic (PLEG): container finished" podID="74dfbdfa-ea21-46dd-8dac-c8aac0050e51" containerID="a7284a87aad0710917bcf488adca5c3cdde0bd8530317b3ab81bf3fa72826dc9" exitCode=0 Feb 18 12:02:21 crc kubenswrapper[4717]: I0218 12:02:21.727090 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwtqk" event={"ID":"74dfbdfa-ea21-46dd-8dac-c8aac0050e51","Type":"ContainerDied","Data":"a7284a87aad0710917bcf488adca5c3cdde0bd8530317b3ab81bf3fa72826dc9"} Feb 18 12:02:21 crc kubenswrapper[4717]: I0218 12:02:21.730044 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg" event={"ID":"3c7a04c5-e38e-41bf-9343-b567857783d6","Type":"ContainerStarted","Data":"b2b6ed2b24755344f5cb366d4cc99709deb466d5e69bdec7f6bb74812679e38f"} Feb 18 12:02:21 crc kubenswrapper[4717]: I0218 12:02:21.730226 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg" Feb 18 12:02:21 crc kubenswrapper[4717]: I0218 12:02:21.785704 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg" podStartSLOduration=2.682253625 podStartE2EDuration="9.785669364s" podCreationTimestamp="2026-02-18 12:02:12 +0000 UTC" firstStartedPulling="2026-02-18 12:02:14.169188489 +0000 UTC m=+768.571289805" lastFinishedPulling="2026-02-18 12:02:21.272604228 +0000 UTC m=+775.674705544" observedRunningTime="2026-02-18 12:02:21.782225384 +0000 UTC m=+776.184326710" watchObservedRunningTime="2026-02-18 12:02:21.785669364 +0000 UTC m=+776.187770680" Feb 18 12:02:22 crc kubenswrapper[4717]: I0218 12:02:22.740297 4717 generic.go:334] "Generic (PLEG): container finished" podID="74dfbdfa-ea21-46dd-8dac-c8aac0050e51" 
containerID="0cbcba5d8610988e2ff65534af62dd6df981cf1e8663b71b1e9b4ebfc2fc2787" exitCode=0 Feb 18 12:02:22 crc kubenswrapper[4717]: I0218 12:02:22.740420 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwtqk" event={"ID":"74dfbdfa-ea21-46dd-8dac-c8aac0050e51","Type":"ContainerDied","Data":"0cbcba5d8610988e2ff65534af62dd6df981cf1e8663b71b1e9b4ebfc2fc2787"} Feb 18 12:02:23 crc kubenswrapper[4717]: I0218 12:02:23.749169 4717 generic.go:334] "Generic (PLEG): container finished" podID="74dfbdfa-ea21-46dd-8dac-c8aac0050e51" containerID="75c71dd77fae2f60dc13f794bb125654680a63f50544a8326accb754eb7f81df" exitCode=0 Feb 18 12:02:23 crc kubenswrapper[4717]: I0218 12:02:23.749243 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwtqk" event={"ID":"74dfbdfa-ea21-46dd-8dac-c8aac0050e51","Type":"ContainerDied","Data":"75c71dd77fae2f60dc13f794bb125654680a63f50544a8326accb754eb7f81df"} Feb 18 12:02:24 crc kubenswrapper[4717]: I0218 12:02:24.134402 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-fbbnw" Feb 18 12:02:24 crc kubenswrapper[4717]: I0218 12:02:24.762347 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwtqk" event={"ID":"74dfbdfa-ea21-46dd-8dac-c8aac0050e51","Type":"ContainerStarted","Data":"4054d39c310a3e7201ac0b27980ea977cde5a6fbf5564be04a6156579254868f"} Feb 18 12:02:24 crc kubenswrapper[4717]: I0218 12:02:24.762404 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwtqk" event={"ID":"74dfbdfa-ea21-46dd-8dac-c8aac0050e51","Type":"ContainerStarted","Data":"7afd4b896c9e462549a8ad3e975ad68e8d8fb2c0cb2f4c21aae02937d0b70eeb"} Feb 18 12:02:24 crc kubenswrapper[4717]: I0218 12:02:24.762417 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwtqk" 
event={"ID":"74dfbdfa-ea21-46dd-8dac-c8aac0050e51","Type":"ContainerStarted","Data":"aaf42d1c5fd9c8d52937105c5887fd05fe99cf8f1b6c12aee96974eaf3cf6fb1"} Feb 18 12:02:24 crc kubenswrapper[4717]: I0218 12:02:24.762430 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwtqk" event={"ID":"74dfbdfa-ea21-46dd-8dac-c8aac0050e51","Type":"ContainerStarted","Data":"96b46caa09ad759d857b4b81807fe547c87aba0ab81445798390383c38816994"} Feb 18 12:02:24 crc kubenswrapper[4717]: I0218 12:02:24.762442 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwtqk" event={"ID":"74dfbdfa-ea21-46dd-8dac-c8aac0050e51","Type":"ContainerStarted","Data":"78ffa89b4595ce43acb41bc3be107a76fa45318db36b12ada382082bd6f70468"} Feb 18 12:02:25 crc kubenswrapper[4717]: I0218 12:02:25.773730 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hwtqk" event={"ID":"74dfbdfa-ea21-46dd-8dac-c8aac0050e51","Type":"ContainerStarted","Data":"282e25696e4ac4aaff9ea72ca4e3d35a945c238773b6343fa3d6a6f8be80ff26"} Feb 18 12:02:25 crc kubenswrapper[4717]: I0218 12:02:25.774195 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:28 crc kubenswrapper[4717]: I0218 12:02:28.370848 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:28 crc kubenswrapper[4717]: I0218 12:02:28.440901 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:28 crc kubenswrapper[4717]: I0218 12:02:28.470052 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-hwtqk" podStartSLOduration=8.709170772 podStartE2EDuration="16.470029792s" podCreationTimestamp="2026-02-18 12:02:12 +0000 UTC" firstStartedPulling="2026-02-18 12:02:13.490664876 +0000 UTC m=+767.892766192" lastFinishedPulling="2026-02-18 
12:02:21.251523896 +0000 UTC m=+775.653625212" observedRunningTime="2026-02-18 12:02:25.803639293 +0000 UTC m=+780.205740619" watchObservedRunningTime="2026-02-18 12:02:28.470029792 +0000 UTC m=+782.872131098" Feb 18 12:02:33 crc kubenswrapper[4717]: I0218 12:02:33.373969 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hwtqk" Feb 18 12:02:33 crc kubenswrapper[4717]: I0218 12:02:33.917567 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-xr4mg" Feb 18 12:02:34 crc kubenswrapper[4717]: I0218 12:02:34.965636 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-vfwxr" Feb 18 12:02:37 crc kubenswrapper[4717]: I0218 12:02:37.785528 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-v98r7"] Feb 18 12:02:37 crc kubenswrapper[4717]: I0218 12:02:37.787066 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v98r7" Feb 18 12:02:37 crc kubenswrapper[4717]: I0218 12:02:37.801032 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 18 12:02:37 crc kubenswrapper[4717]: I0218 12:02:37.801464 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 18 12:02:37 crc kubenswrapper[4717]: I0218 12:02:37.801642 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-b888w" Feb 18 12:02:37 crc kubenswrapper[4717]: I0218 12:02:37.852912 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v98r7"] Feb 18 12:02:37 crc kubenswrapper[4717]: I0218 12:02:37.979739 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv47w\" (UniqueName: \"kubernetes.io/projected/c83f05c7-8eea-4758-a4be-7d8c0206ee1b-kube-api-access-kv47w\") pod \"openstack-operator-index-v98r7\" (UID: \"c83f05c7-8eea-4758-a4be-7d8c0206ee1b\") " pod="openstack-operators/openstack-operator-index-v98r7" Feb 18 12:02:38 crc kubenswrapper[4717]: I0218 12:02:38.081381 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv47w\" (UniqueName: \"kubernetes.io/projected/c83f05c7-8eea-4758-a4be-7d8c0206ee1b-kube-api-access-kv47w\") pod \"openstack-operator-index-v98r7\" (UID: \"c83f05c7-8eea-4758-a4be-7d8c0206ee1b\") " pod="openstack-operators/openstack-operator-index-v98r7" Feb 18 12:02:38 crc kubenswrapper[4717]: I0218 12:02:38.101908 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv47w\" (UniqueName: \"kubernetes.io/projected/c83f05c7-8eea-4758-a4be-7d8c0206ee1b-kube-api-access-kv47w\") pod \"openstack-operator-index-v98r7\" (UID: 
\"c83f05c7-8eea-4758-a4be-7d8c0206ee1b\") " pod="openstack-operators/openstack-operator-index-v98r7" Feb 18 12:02:38 crc kubenswrapper[4717]: I0218 12:02:38.115631 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-v98r7" Feb 18 12:02:38 crc kubenswrapper[4717]: I0218 12:02:38.509029 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v98r7"] Feb 18 12:02:38 crc kubenswrapper[4717]: I0218 12:02:38.870473 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v98r7" event={"ID":"c83f05c7-8eea-4758-a4be-7d8c0206ee1b","Type":"ContainerStarted","Data":"50229338af788933f2d1669ac32f852edefc2ec68d13a426ece81a2f199b23be"} Feb 18 12:02:40 crc kubenswrapper[4717]: I0218 12:02:40.359690 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-v98r7"] Feb 18 12:02:40 crc kubenswrapper[4717]: I0218 12:02:40.885504 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v98r7" event={"ID":"c83f05c7-8eea-4758-a4be-7d8c0206ee1b","Type":"ContainerStarted","Data":"e924b43eace501040f7d3e60842608c2051df745a56dabfd2c072f9f5c60cb52"} Feb 18 12:02:40 crc kubenswrapper[4717]: I0218 12:02:40.885650 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-v98r7" podUID="c83f05c7-8eea-4758-a4be-7d8c0206ee1b" containerName="registry-server" containerID="cri-o://e924b43eace501040f7d3e60842608c2051df745a56dabfd2c072f9f5c60cb52" gracePeriod=2 Feb 18 12:02:40 crc kubenswrapper[4717]: I0218 12:02:40.910807 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-v98r7" podStartSLOduration=1.952862259 podStartE2EDuration="3.910786467s" podCreationTimestamp="2026-02-18 12:02:37 +0000 UTC" 
firstStartedPulling="2026-02-18 12:02:38.514845393 +0000 UTC m=+792.916946709" lastFinishedPulling="2026-02-18 12:02:40.472769601 +0000 UTC m=+794.874870917" observedRunningTime="2026-02-18 12:02:40.907735126 +0000 UTC m=+795.309836442" watchObservedRunningTime="2026-02-18 12:02:40.910786467 +0000 UTC m=+795.312887783" Feb 18 12:02:40 crc kubenswrapper[4717]: I0218 12:02:40.966749 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6gp6j"] Feb 18 12:02:40 crc kubenswrapper[4717]: I0218 12:02:40.967737 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6gp6j" Feb 18 12:02:40 crc kubenswrapper[4717]: I0218 12:02:40.974741 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6gp6j"] Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.123147 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4tnh\" (UniqueName: \"kubernetes.io/projected/0b3c9586-d52a-4df4-a96f-91773c3bfbfa-kube-api-access-b4tnh\") pod \"openstack-operator-index-6gp6j\" (UID: \"0b3c9586-d52a-4df4-a96f-91773c3bfbfa\") " pod="openstack-operators/openstack-operator-index-6gp6j" Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.224411 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4tnh\" (UniqueName: \"kubernetes.io/projected/0b3c9586-d52a-4df4-a96f-91773c3bfbfa-kube-api-access-b4tnh\") pod \"openstack-operator-index-6gp6j\" (UID: \"0b3c9586-d52a-4df4-a96f-91773c3bfbfa\") " pod="openstack-operators/openstack-operator-index-6gp6j" Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.244909 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4tnh\" (UniqueName: \"kubernetes.io/projected/0b3c9586-d52a-4df4-a96f-91773c3bfbfa-kube-api-access-b4tnh\") pod 
\"openstack-operator-index-6gp6j\" (UID: \"0b3c9586-d52a-4df4-a96f-91773c3bfbfa\") " pod="openstack-operators/openstack-operator-index-6gp6j" Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.283504 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-v98r7" Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.317904 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6gp6j" Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.427186 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv47w\" (UniqueName: \"kubernetes.io/projected/c83f05c7-8eea-4758-a4be-7d8c0206ee1b-kube-api-access-kv47w\") pod \"c83f05c7-8eea-4758-a4be-7d8c0206ee1b\" (UID: \"c83f05c7-8eea-4758-a4be-7d8c0206ee1b\") " Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.456741 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c83f05c7-8eea-4758-a4be-7d8c0206ee1b-kube-api-access-kv47w" (OuterVolumeSpecName: "kube-api-access-kv47w") pod "c83f05c7-8eea-4758-a4be-7d8c0206ee1b" (UID: "c83f05c7-8eea-4758-a4be-7d8c0206ee1b"). InnerVolumeSpecName "kube-api-access-kv47w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.529370 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv47w\" (UniqueName: \"kubernetes.io/projected/c83f05c7-8eea-4758-a4be-7d8c0206ee1b-kube-api-access-kv47w\") on node \"crc\" DevicePath \"\"" Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.769355 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6gp6j"] Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.894546 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6gp6j" event={"ID":"0b3c9586-d52a-4df4-a96f-91773c3bfbfa","Type":"ContainerStarted","Data":"3678a65c8c43a7f597bc4982cf11986da706627843868dcdc065b18035b75267"} Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.896008 4717 generic.go:334] "Generic (PLEG): container finished" podID="c83f05c7-8eea-4758-a4be-7d8c0206ee1b" containerID="e924b43eace501040f7d3e60842608c2051df745a56dabfd2c072f9f5c60cb52" exitCode=0 Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.896046 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v98r7" event={"ID":"c83f05c7-8eea-4758-a4be-7d8c0206ee1b","Type":"ContainerDied","Data":"e924b43eace501040f7d3e60842608c2051df745a56dabfd2c072f9f5c60cb52"} Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.896076 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v98r7" event={"ID":"c83f05c7-8eea-4758-a4be-7d8c0206ee1b","Type":"ContainerDied","Data":"50229338af788933f2d1669ac32f852edefc2ec68d13a426ece81a2f199b23be"} Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.896099 4717 scope.go:117] "RemoveContainer" containerID="e924b43eace501040f7d3e60842608c2051df745a56dabfd2c072f9f5c60cb52" Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.896112 4717 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-v98r7" Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.923923 4717 scope.go:117] "RemoveContainer" containerID="e924b43eace501040f7d3e60842608c2051df745a56dabfd2c072f9f5c60cb52" Feb 18 12:02:41 crc kubenswrapper[4717]: E0218 12:02:41.924941 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e924b43eace501040f7d3e60842608c2051df745a56dabfd2c072f9f5c60cb52\": container with ID starting with e924b43eace501040f7d3e60842608c2051df745a56dabfd2c072f9f5c60cb52 not found: ID does not exist" containerID="e924b43eace501040f7d3e60842608c2051df745a56dabfd2c072f9f5c60cb52" Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.924989 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e924b43eace501040f7d3e60842608c2051df745a56dabfd2c072f9f5c60cb52"} err="failed to get container status \"e924b43eace501040f7d3e60842608c2051df745a56dabfd2c072f9f5c60cb52\": rpc error: code = NotFound desc = could not find container \"e924b43eace501040f7d3e60842608c2051df745a56dabfd2c072f9f5c60cb52\": container with ID starting with e924b43eace501040f7d3e60842608c2051df745a56dabfd2c072f9f5c60cb52 not found: ID does not exist" Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.934901 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-v98r7"] Feb 18 12:02:41 crc kubenswrapper[4717]: I0218 12:02:41.940944 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-v98r7"] Feb 18 12:02:42 crc kubenswrapper[4717]: I0218 12:02:42.772988 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:02:42 crc kubenswrapper[4717]: I0218 12:02:42.773315 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:02:42 crc kubenswrapper[4717]: I0218 12:02:42.902550 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6gp6j" event={"ID":"0b3c9586-d52a-4df4-a96f-91773c3bfbfa","Type":"ContainerStarted","Data":"c4d163e55cceafd06a8d9228654218e886cc678d46abfffcee064356b2e2cd77"} Feb 18 12:02:42 crc kubenswrapper[4717]: I0218 12:02:42.926858 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6gp6j" podStartSLOduration=2.869628783 podStartE2EDuration="2.926837409s" podCreationTimestamp="2026-02-18 12:02:40 +0000 UTC" firstStartedPulling="2026-02-18 12:02:41.779801002 +0000 UTC m=+796.181902318" lastFinishedPulling="2026-02-18 12:02:41.837009608 +0000 UTC m=+796.239110944" observedRunningTime="2026-02-18 12:02:42.918077558 +0000 UTC m=+797.320178894" watchObservedRunningTime="2026-02-18 12:02:42.926837409 +0000 UTC m=+797.328938725" Feb 18 12:02:43 crc kubenswrapper[4717]: I0218 12:02:43.044017 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83f05c7-8eea-4758-a4be-7d8c0206ee1b" path="/var/lib/kubelet/pods/c83f05c7-8eea-4758-a4be-7d8c0206ee1b/volumes" Feb 18 12:02:51 crc kubenswrapper[4717]: I0218 12:02:51.318968 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-6gp6j" Feb 18 12:02:51 crc kubenswrapper[4717]: I0218 12:02:51.319719 4717 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-6gp6j" Feb 18 12:02:51 crc kubenswrapper[4717]: I0218 12:02:51.351707 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-6gp6j" Feb 18 12:02:51 crc kubenswrapper[4717]: I0218 12:02:51.983410 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6gp6j" Feb 18 12:02:53 crc kubenswrapper[4717]: I0218 12:02:53.405297 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8"] Feb 18 12:02:53 crc kubenswrapper[4717]: E0218 12:02:53.405953 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83f05c7-8eea-4758-a4be-7d8c0206ee1b" containerName="registry-server" Feb 18 12:02:53 crc kubenswrapper[4717]: I0218 12:02:53.405969 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83f05c7-8eea-4758-a4be-7d8c0206ee1b" containerName="registry-server" Feb 18 12:02:53 crc kubenswrapper[4717]: I0218 12:02:53.406114 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83f05c7-8eea-4758-a4be-7d8c0206ee1b" containerName="registry-server" Feb 18 12:02:53 crc kubenswrapper[4717]: I0218 12:02:53.407082 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" Feb 18 12:02:53 crc kubenswrapper[4717]: I0218 12:02:53.409321 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-k2b68" Feb 18 12:02:53 crc kubenswrapper[4717]: I0218 12:02:53.425600 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8"] Feb 18 12:02:53 crc kubenswrapper[4717]: I0218 12:02:53.589371 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxg8t\" (UniqueName: \"kubernetes.io/projected/8a937a28-c870-4dde-a3ee-ebb15180d623-kube-api-access-qxg8t\") pod \"556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8\" (UID: \"8a937a28-c870-4dde-a3ee-ebb15180d623\") " pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" Feb 18 12:02:53 crc kubenswrapper[4717]: I0218 12:02:53.589553 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a937a28-c870-4dde-a3ee-ebb15180d623-util\") pod \"556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8\" (UID: \"8a937a28-c870-4dde-a3ee-ebb15180d623\") " pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" Feb 18 12:02:53 crc kubenswrapper[4717]: I0218 12:02:53.589641 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a937a28-c870-4dde-a3ee-ebb15180d623-bundle\") pod \"556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8\" (UID: \"8a937a28-c870-4dde-a3ee-ebb15180d623\") " pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" Feb 18 12:02:53 crc kubenswrapper[4717]: I0218 
12:02:53.690613 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a937a28-c870-4dde-a3ee-ebb15180d623-bundle\") pod \"556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8\" (UID: \"8a937a28-c870-4dde-a3ee-ebb15180d623\") " pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" Feb 18 12:02:53 crc kubenswrapper[4717]: I0218 12:02:53.690685 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxg8t\" (UniqueName: \"kubernetes.io/projected/8a937a28-c870-4dde-a3ee-ebb15180d623-kube-api-access-qxg8t\") pod \"556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8\" (UID: \"8a937a28-c870-4dde-a3ee-ebb15180d623\") " pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" Feb 18 12:02:53 crc kubenswrapper[4717]: I0218 12:02:53.690715 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a937a28-c870-4dde-a3ee-ebb15180d623-util\") pod \"556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8\" (UID: \"8a937a28-c870-4dde-a3ee-ebb15180d623\") " pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" Feb 18 12:02:53 crc kubenswrapper[4717]: I0218 12:02:53.691282 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a937a28-c870-4dde-a3ee-ebb15180d623-util\") pod \"556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8\" (UID: \"8a937a28-c870-4dde-a3ee-ebb15180d623\") " pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" Feb 18 12:02:53 crc kubenswrapper[4717]: I0218 12:02:53.691295 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8a937a28-c870-4dde-a3ee-ebb15180d623-bundle\") pod \"556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8\" (UID: \"8a937a28-c870-4dde-a3ee-ebb15180d623\") " pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" Feb 18 12:02:53 crc kubenswrapper[4717]: I0218 12:02:53.712740 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxg8t\" (UniqueName: \"kubernetes.io/projected/8a937a28-c870-4dde-a3ee-ebb15180d623-kube-api-access-qxg8t\") pod \"556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8\" (UID: \"8a937a28-c870-4dde-a3ee-ebb15180d623\") " pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" Feb 18 12:02:53 crc kubenswrapper[4717]: I0218 12:02:53.726367 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" Feb 18 12:02:54 crc kubenswrapper[4717]: I0218 12:02:54.195762 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8"] Feb 18 12:02:54 crc kubenswrapper[4717]: I0218 12:02:54.981078 4717 generic.go:334] "Generic (PLEG): container finished" podID="8a937a28-c870-4dde-a3ee-ebb15180d623" containerID="1a408828f5674bd46ff99d9be23d1d91cff0abb064190c5c852034240daae908" exitCode=0 Feb 18 12:02:54 crc kubenswrapper[4717]: I0218 12:02:54.981532 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" event={"ID":"8a937a28-c870-4dde-a3ee-ebb15180d623","Type":"ContainerDied","Data":"1a408828f5674bd46ff99d9be23d1d91cff0abb064190c5c852034240daae908"} Feb 18 12:02:54 crc kubenswrapper[4717]: I0218 12:02:54.981593 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" event={"ID":"8a937a28-c870-4dde-a3ee-ebb15180d623","Type":"ContainerStarted","Data":"3e56e856d40eb6b240c2a85f86a879f7b954ebf6bd9af8da7026d42d3fd79217"} Feb 18 12:02:55 crc kubenswrapper[4717]: I0218 12:02:55.991376 4717 generic.go:334] "Generic (PLEG): container finished" podID="8a937a28-c870-4dde-a3ee-ebb15180d623" containerID="346ddf3c1247a8822839cfdf475832691e37a813c4e03f2c16f128ab64dee84f" exitCode=0 Feb 18 12:02:55 crc kubenswrapper[4717]: I0218 12:02:55.991506 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" event={"ID":"8a937a28-c870-4dde-a3ee-ebb15180d623","Type":"ContainerDied","Data":"346ddf3c1247a8822839cfdf475832691e37a813c4e03f2c16f128ab64dee84f"} Feb 18 12:02:57 crc kubenswrapper[4717]: I0218 12:02:57.000584 4717 generic.go:334] "Generic (PLEG): container finished" podID="8a937a28-c870-4dde-a3ee-ebb15180d623" containerID="6c21d82505b35dc4676526de2c7bdcde6952efa64cee058eb5d2ec4c12057bfa" exitCode=0 Feb 18 12:02:57 crc kubenswrapper[4717]: I0218 12:02:57.000674 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" event={"ID":"8a937a28-c870-4dde-a3ee-ebb15180d623","Type":"ContainerDied","Data":"6c21d82505b35dc4676526de2c7bdcde6952efa64cee058eb5d2ec4c12057bfa"} Feb 18 12:02:58 crc kubenswrapper[4717]: I0218 12:02:58.275319 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" Feb 18 12:02:58 crc kubenswrapper[4717]: I0218 12:02:58.355290 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a937a28-c870-4dde-a3ee-ebb15180d623-util\") pod \"8a937a28-c870-4dde-a3ee-ebb15180d623\" (UID: \"8a937a28-c870-4dde-a3ee-ebb15180d623\") " Feb 18 12:02:58 crc kubenswrapper[4717]: I0218 12:02:58.355338 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxg8t\" (UniqueName: \"kubernetes.io/projected/8a937a28-c870-4dde-a3ee-ebb15180d623-kube-api-access-qxg8t\") pod \"8a937a28-c870-4dde-a3ee-ebb15180d623\" (UID: \"8a937a28-c870-4dde-a3ee-ebb15180d623\") " Feb 18 12:02:58 crc kubenswrapper[4717]: I0218 12:02:58.355385 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a937a28-c870-4dde-a3ee-ebb15180d623-bundle\") pod \"8a937a28-c870-4dde-a3ee-ebb15180d623\" (UID: \"8a937a28-c870-4dde-a3ee-ebb15180d623\") " Feb 18 12:02:58 crc kubenswrapper[4717]: I0218 12:02:58.356249 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a937a28-c870-4dde-a3ee-ebb15180d623-bundle" (OuterVolumeSpecName: "bundle") pod "8a937a28-c870-4dde-a3ee-ebb15180d623" (UID: "8a937a28-c870-4dde-a3ee-ebb15180d623"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:02:58 crc kubenswrapper[4717]: I0218 12:02:58.356488 4717 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a937a28-c870-4dde-a3ee-ebb15180d623-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:02:58 crc kubenswrapper[4717]: I0218 12:02:58.363290 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a937a28-c870-4dde-a3ee-ebb15180d623-kube-api-access-qxg8t" (OuterVolumeSpecName: "kube-api-access-qxg8t") pod "8a937a28-c870-4dde-a3ee-ebb15180d623" (UID: "8a937a28-c870-4dde-a3ee-ebb15180d623"). InnerVolumeSpecName "kube-api-access-qxg8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:02:58 crc kubenswrapper[4717]: I0218 12:02:58.372072 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a937a28-c870-4dde-a3ee-ebb15180d623-util" (OuterVolumeSpecName: "util") pod "8a937a28-c870-4dde-a3ee-ebb15180d623" (UID: "8a937a28-c870-4dde-a3ee-ebb15180d623"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:02:58 crc kubenswrapper[4717]: I0218 12:02:58.457091 4717 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a937a28-c870-4dde-a3ee-ebb15180d623-util\") on node \"crc\" DevicePath \"\"" Feb 18 12:02:58 crc kubenswrapper[4717]: I0218 12:02:58.457119 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxg8t\" (UniqueName: \"kubernetes.io/projected/8a937a28-c870-4dde-a3ee-ebb15180d623-kube-api-access-qxg8t\") on node \"crc\" DevicePath \"\"" Feb 18 12:02:59 crc kubenswrapper[4717]: I0218 12:02:59.019391 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" event={"ID":"8a937a28-c870-4dde-a3ee-ebb15180d623","Type":"ContainerDied","Data":"3e56e856d40eb6b240c2a85f86a879f7b954ebf6bd9af8da7026d42d3fd79217"} Feb 18 12:02:59 crc kubenswrapper[4717]: I0218 12:02:59.019479 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e56e856d40eb6b240c2a85f86a879f7b954ebf6bd9af8da7026d42d3fd79217" Feb 18 12:02:59 crc kubenswrapper[4717]: I0218 12:02:59.019580 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8" Feb 18 12:03:05 crc kubenswrapper[4717]: I0218 12:03:05.402569 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-84d9946dcc-wjrpz"] Feb 18 12:03:05 crc kubenswrapper[4717]: E0218 12:03:05.403823 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a937a28-c870-4dde-a3ee-ebb15180d623" containerName="extract" Feb 18 12:03:05 crc kubenswrapper[4717]: I0218 12:03:05.403846 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a937a28-c870-4dde-a3ee-ebb15180d623" containerName="extract" Feb 18 12:03:05 crc kubenswrapper[4717]: E0218 12:03:05.403882 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a937a28-c870-4dde-a3ee-ebb15180d623" containerName="pull" Feb 18 12:03:05 crc kubenswrapper[4717]: I0218 12:03:05.403892 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a937a28-c870-4dde-a3ee-ebb15180d623" containerName="pull" Feb 18 12:03:05 crc kubenswrapper[4717]: E0218 12:03:05.403901 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a937a28-c870-4dde-a3ee-ebb15180d623" containerName="util" Feb 18 12:03:05 crc kubenswrapper[4717]: I0218 12:03:05.403910 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a937a28-c870-4dde-a3ee-ebb15180d623" containerName="util" Feb 18 12:03:05 crc kubenswrapper[4717]: I0218 12:03:05.404067 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a937a28-c870-4dde-a3ee-ebb15180d623" containerName="extract" Feb 18 12:03:05 crc kubenswrapper[4717]: I0218 12:03:05.404772 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-84d9946dcc-wjrpz" Feb 18 12:03:05 crc kubenswrapper[4717]: I0218 12:03:05.406579 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-79s9b" Feb 18 12:03:05 crc kubenswrapper[4717]: I0218 12:03:05.433248 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-84d9946dcc-wjrpz"] Feb 18 12:03:05 crc kubenswrapper[4717]: I0218 12:03:05.549055 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5zf8\" (UniqueName: \"kubernetes.io/projected/0807bf80-9dc3-48d9-8cbe-748f85b2089f-kube-api-access-w5zf8\") pod \"openstack-operator-controller-init-84d9946dcc-wjrpz\" (UID: \"0807bf80-9dc3-48d9-8cbe-748f85b2089f\") " pod="openstack-operators/openstack-operator-controller-init-84d9946dcc-wjrpz" Feb 18 12:03:05 crc kubenswrapper[4717]: I0218 12:03:05.650401 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5zf8\" (UniqueName: \"kubernetes.io/projected/0807bf80-9dc3-48d9-8cbe-748f85b2089f-kube-api-access-w5zf8\") pod \"openstack-operator-controller-init-84d9946dcc-wjrpz\" (UID: \"0807bf80-9dc3-48d9-8cbe-748f85b2089f\") " pod="openstack-operators/openstack-operator-controller-init-84d9946dcc-wjrpz" Feb 18 12:03:05 crc kubenswrapper[4717]: I0218 12:03:05.688466 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5zf8\" (UniqueName: \"kubernetes.io/projected/0807bf80-9dc3-48d9-8cbe-748f85b2089f-kube-api-access-w5zf8\") pod \"openstack-operator-controller-init-84d9946dcc-wjrpz\" (UID: \"0807bf80-9dc3-48d9-8cbe-748f85b2089f\") " pod="openstack-operators/openstack-operator-controller-init-84d9946dcc-wjrpz" Feb 18 12:03:05 crc kubenswrapper[4717]: I0218 12:03:05.729853 4717 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-84d9946dcc-wjrpz" Feb 18 12:03:05 crc kubenswrapper[4717]: I0218 12:03:05.968501 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-84d9946dcc-wjrpz"] Feb 18 12:03:06 crc kubenswrapper[4717]: I0218 12:03:06.102417 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-84d9946dcc-wjrpz" event={"ID":"0807bf80-9dc3-48d9-8cbe-748f85b2089f","Type":"ContainerStarted","Data":"daf9a03b82eac53dcc726637365f710da5e5de2a09ac6d32911f547fe6339dd4"} Feb 18 12:03:12 crc kubenswrapper[4717]: I0218 12:03:12.140467 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-84d9946dcc-wjrpz" event={"ID":"0807bf80-9dc3-48d9-8cbe-748f85b2089f","Type":"ContainerStarted","Data":"999bd54e15c772509779ea9fe596d7a7726376b5db080e759ee60bc021371702"} Feb 18 12:03:12 crc kubenswrapper[4717]: I0218 12:03:12.152699 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-84d9946dcc-wjrpz" Feb 18 12:03:12 crc kubenswrapper[4717]: I0218 12:03:12.180818 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-84d9946dcc-wjrpz" podStartSLOduration=1.924021154 podStartE2EDuration="7.180799332s" podCreationTimestamp="2026-02-18 12:03:05 +0000 UTC" firstStartedPulling="2026-02-18 12:03:05.976022675 +0000 UTC m=+820.378123991" lastFinishedPulling="2026-02-18 12:03:11.232800853 +0000 UTC m=+825.634902169" observedRunningTime="2026-02-18 12:03:12.179129122 +0000 UTC m=+826.581230458" watchObservedRunningTime="2026-02-18 12:03:12.180799332 +0000 UTC m=+826.582900648" Feb 18 12:03:12 crc kubenswrapper[4717]: I0218 12:03:12.772946 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:03:12 crc kubenswrapper[4717]: I0218 12:03:12.773350 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:03:25 crc kubenswrapper[4717]: I0218 12:03:25.734251 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-84d9946dcc-wjrpz" Feb 18 12:03:42 crc kubenswrapper[4717]: I0218 12:03:42.772717 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:03:42 crc kubenswrapper[4717]: I0218 12:03:42.773681 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:03:42 crc kubenswrapper[4717]: I0218 12:03:42.773763 4717 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 12:03:42 crc kubenswrapper[4717]: I0218 12:03:42.774629 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"11a1c75eda22e757819ca65e0602c28b288a19c1473f7585c0555728262bdcd4"} pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:03:42 crc kubenswrapper[4717]: I0218 12:03:42.774695 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" containerID="cri-o://11a1c75eda22e757819ca65e0602c28b288a19c1473f7585c0555728262bdcd4" gracePeriod=600 Feb 18 12:03:44 crc kubenswrapper[4717]: I0218 12:03:44.353501 4717 generic.go:334] "Generic (PLEG): container finished" podID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerID="11a1c75eda22e757819ca65e0602c28b288a19c1473f7585c0555728262bdcd4" exitCode=0 Feb 18 12:03:44 crc kubenswrapper[4717]: I0218 12:03:44.354171 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerDied","Data":"11a1c75eda22e757819ca65e0602c28b288a19c1473f7585c0555728262bdcd4"} Feb 18 12:03:44 crc kubenswrapper[4717]: I0218 12:03:44.354200 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"ed34df06cd7a6fd0e8376737775379dc22fdfd46578a3bf716f5810ce3e16a5f"} Feb 18 12:03:44 crc kubenswrapper[4717]: I0218 12:03:44.354216 4717 scope.go:117] "RemoveContainer" containerID="dec67bbab9750d74b40d9a5adb8456536898db0fb4f915a1c596f22068aed83e" Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.851386 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-5lvkl"] Feb 18 12:03:45 crc 
kubenswrapper[4717]: I0218 12:03:45.852515 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5lvkl" Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.854547 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-2js7s" Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.865292 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-5lvkl"] Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.869220 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-jhnvm"] Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.870075 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jhnvm" Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.873432 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-w57k8" Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.902922 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-jhnvm"] Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.907099 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-2n2t2"] Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.907925 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2n2t2" Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.915228 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-wqvwd" Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.927622 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-2n2t2"] Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.951512 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-v6qrx"] Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.976251 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb54r\" (UniqueName: \"kubernetes.io/projected/3503ed6a-e486-404f-8ac3-df63d9d28c2d-kube-api-access-tb54r\") pod \"cinder-operator-controller-manager-5d946d989d-jhnvm\" (UID: \"3503ed6a-e486-404f-8ac3-df63d9d28c2d\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jhnvm" Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.976439 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prhb6\" (UniqueName: \"kubernetes.io/projected/eca115e0-882d-4173-a714-1883215088b5-kube-api-access-prhb6\") pod \"barbican-operator-controller-manager-868647ff47-5lvkl\" (UID: \"eca115e0-882d-4173-a714-1883215088b5\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5lvkl" Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.976869 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6qrx" Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.978896 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-v6qrx"] Feb 18 12:03:45 crc kubenswrapper[4717]: I0218 12:03:45.983766 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-bcs5h" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.013152 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rvxd2"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.015639 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rvxd2" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.018628 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-fcvh2" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.025289 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-9cnsb"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.026484 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9cnsb" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.031862 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-v2wwp" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.033333 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rvxd2"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.058352 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-9cnsb"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.077625 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-szzvb"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.078812 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.080026 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67vl8\" (UniqueName: \"kubernetes.io/projected/ba6c36b6-f446-4cc2-b1b0-9ebb6146dfde-kube-api-access-67vl8\") pod \"glance-operator-controller-manager-77987464f4-v6qrx\" (UID: \"ba6c36b6-f446-4cc2-b1b0-9ebb6146dfde\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6qrx" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.080070 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb54r\" (UniqueName: \"kubernetes.io/projected/3503ed6a-e486-404f-8ac3-df63d9d28c2d-kube-api-access-tb54r\") pod \"cinder-operator-controller-manager-5d946d989d-jhnvm\" (UID: \"3503ed6a-e486-404f-8ac3-df63d9d28c2d\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jhnvm" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.080097 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thdx7\" (UniqueName: \"kubernetes.io/projected/986ac762-6758-4402-a5c9-849780ff7fab-kube-api-access-thdx7\") pod \"designate-operator-controller-manager-6d8bf5c495-2n2t2\" (UID: \"986ac762-6758-4402-a5c9-849780ff7fab\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2n2t2" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.080122 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prhb6\" (UniqueName: \"kubernetes.io/projected/eca115e0-882d-4173-a714-1883215088b5-kube-api-access-prhb6\") pod \"barbican-operator-controller-manager-868647ff47-5lvkl\" (UID: \"eca115e0-882d-4173-a714-1883215088b5\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5lvkl" Feb 18 
12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.086855 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-nd9ll" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.087110 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.103894 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-ldqdh"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.105050 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ldqdh" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.110621 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-64xjd" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.125095 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prhb6\" (UniqueName: \"kubernetes.io/projected/eca115e0-882d-4173-a714-1883215088b5-kube-api-access-prhb6\") pod \"barbican-operator-controller-manager-868647ff47-5lvkl\" (UID: \"eca115e0-882d-4173-a714-1883215088b5\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5lvkl" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.130221 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb54r\" (UniqueName: \"kubernetes.io/projected/3503ed6a-e486-404f-8ac3-df63d9d28c2d-kube-api-access-tb54r\") pod \"cinder-operator-controller-manager-5d946d989d-jhnvm\" (UID: \"3503ed6a-e486-404f-8ac3-df63d9d28c2d\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jhnvm" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.130880 4717 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-szzvb"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.141125 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-ldqdh"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.145219 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-2lmml"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.148422 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2lmml" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.150524 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-j7vqs" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.155422 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-nljkj"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.156787 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-nljkj" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.159712 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-95h52" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.166751 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-2lmml"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.168997 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5lvkl" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.182110 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bthlf\" (UniqueName: \"kubernetes.io/projected/f9800e95-aed6-4d9b-9e88-b6a5f303ee16-kube-api-access-bthlf\") pod \"heat-operator-controller-manager-69f49c598c-9cnsb\" (UID: \"f9800e95-aed6-4d9b-9e88-b6a5f303ee16\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9cnsb" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.182182 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert\") pod \"infra-operator-controller-manager-79d975b745-szzvb\" (UID: \"96c16cf0-31b6-4830-b92f-f25b4ce11979\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.182237 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8bvb\" (UniqueName: \"kubernetes.io/projected/b6f768c0-a04d-49f7-a0ae-5ea5ee09c26b-kube-api-access-h8bvb\") pod \"ironic-operator-controller-manager-554564d7fc-ldqdh\" (UID: \"b6f768c0-a04d-49f7-a0ae-5ea5ee09c26b\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ldqdh" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.182281 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-696k9\" (UniqueName: \"kubernetes.io/projected/e82b0608-77fd-4e73-bafb-00a7b43b6299-kube-api-access-696k9\") pod \"horizon-operator-controller-manager-5b9b8895d5-rvxd2\" (UID: \"e82b0608-77fd-4e73-bafb-00a7b43b6299\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rvxd2" Feb 18 12:03:46 crc 
kubenswrapper[4717]: I0218 12:03:46.182316 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67vl8\" (UniqueName: \"kubernetes.io/projected/ba6c36b6-f446-4cc2-b1b0-9ebb6146dfde-kube-api-access-67vl8\") pod \"glance-operator-controller-manager-77987464f4-v6qrx\" (UID: \"ba6c36b6-f446-4cc2-b1b0-9ebb6146dfde\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6qrx" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.182334 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kng9c\" (UniqueName: \"kubernetes.io/projected/96c16cf0-31b6-4830-b92f-f25b4ce11979-kube-api-access-kng9c\") pod \"infra-operator-controller-manager-79d975b745-szzvb\" (UID: \"96c16cf0-31b6-4830-b92f-f25b4ce11979\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.182368 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thdx7\" (UniqueName: \"kubernetes.io/projected/986ac762-6758-4402-a5c9-849780ff7fab-kube-api-access-thdx7\") pod \"designate-operator-controller-manager-6d8bf5c495-2n2t2\" (UID: \"986ac762-6758-4402-a5c9-849780ff7fab\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2n2t2" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.188550 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-sl49j"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.191196 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jhnvm" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.192225 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sl49j" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.202643 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-f5bh7" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.202878 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-nljkj"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.218542 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67vl8\" (UniqueName: \"kubernetes.io/projected/ba6c36b6-f446-4cc2-b1b0-9ebb6146dfde-kube-api-access-67vl8\") pod \"glance-operator-controller-manager-77987464f4-v6qrx\" (UID: \"ba6c36b6-f446-4cc2-b1b0-9ebb6146dfde\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6qrx" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.234448 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-sl49j"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.237040 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thdx7\" (UniqueName: \"kubernetes.io/projected/986ac762-6758-4402-a5c9-849780ff7fab-kube-api-access-thdx7\") pod \"designate-operator-controller-manager-6d8bf5c495-2n2t2\" (UID: \"986ac762-6758-4402-a5c9-849780ff7fab\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2n2t2" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.238080 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2n2t2" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.272477 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-52fmp"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.273417 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-52fmp" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.276992 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-kjzr6" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.285329 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kng9c\" (UniqueName: \"kubernetes.io/projected/96c16cf0-31b6-4830-b92f-f25b4ce11979-kube-api-access-kng9c\") pod \"infra-operator-controller-manager-79d975b745-szzvb\" (UID: \"96c16cf0-31b6-4830-b92f-f25b4ce11979\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.285456 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bthlf\" (UniqueName: \"kubernetes.io/projected/f9800e95-aed6-4d9b-9e88-b6a5f303ee16-kube-api-access-bthlf\") pod \"heat-operator-controller-manager-69f49c598c-9cnsb\" (UID: \"f9800e95-aed6-4d9b-9e88-b6a5f303ee16\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9cnsb" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.285502 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert\") pod \"infra-operator-controller-manager-79d975b745-szzvb\" (UID: \"96c16cf0-31b6-4830-b92f-f25b4ce11979\") " 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.285575 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvtwk\" (UniqueName: \"kubernetes.io/projected/a14214f1-4961-4ade-ba45-d48139b6fd0d-kube-api-access-bvtwk\") pod \"mariadb-operator-controller-manager-6994f66f48-sl49j\" (UID: \"a14214f1-4961-4ade-ba45-d48139b6fd0d\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sl49j" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.285640 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnjjg\" (UniqueName: \"kubernetes.io/projected/8d4a2d32-4724-4580-a542-7552e580ed15-kube-api-access-xnjjg\") pod \"keystone-operator-controller-manager-b4d948c87-2lmml\" (UID: \"8d4a2d32-4724-4580-a542-7552e580ed15\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2lmml" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.285675 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8bvb\" (UniqueName: \"kubernetes.io/projected/b6f768c0-a04d-49f7-a0ae-5ea5ee09c26b-kube-api-access-h8bvb\") pod \"ironic-operator-controller-manager-554564d7fc-ldqdh\" (UID: \"b6f768c0-a04d-49f7-a0ae-5ea5ee09c26b\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ldqdh" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.285710 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-696k9\" (UniqueName: \"kubernetes.io/projected/e82b0608-77fd-4e73-bafb-00a7b43b6299-kube-api-access-696k9\") pod \"horizon-operator-controller-manager-5b9b8895d5-rvxd2\" (UID: \"e82b0608-77fd-4e73-bafb-00a7b43b6299\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rvxd2" Feb 18 12:03:46 crc kubenswrapper[4717]: 
I0218 12:03:46.285745 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7gst\" (UniqueName: \"kubernetes.io/projected/927be7f4-3bc1-42c8-917f-8b898bbbc21a-kube-api-access-r7gst\") pod \"manila-operator-controller-manager-54f6768c69-nljkj\" (UID: \"927be7f4-3bc1-42c8-917f-8b898bbbc21a\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-nljkj" Feb 18 12:03:46 crc kubenswrapper[4717]: E0218 12:03:46.285985 4717 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 12:03:46 crc kubenswrapper[4717]: E0218 12:03:46.286048 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert podName:96c16cf0-31b6-4830-b92f-f25b4ce11979 nodeName:}" failed. No retries permitted until 2026-02-18 12:03:46.786029222 +0000 UTC m=+861.188130538 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert") pod "infra-operator-controller-manager-79d975b745-szzvb" (UID: "96c16cf0-31b6-4830-b92f-f25b4ce11979") : secret "infra-operator-webhook-server-cert" not found Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.317730 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-xdtl8"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.318753 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-696k9\" (UniqueName: \"kubernetes.io/projected/e82b0608-77fd-4e73-bafb-00a7b43b6299-kube-api-access-696k9\") pod \"horizon-operator-controller-manager-5b9b8895d5-rvxd2\" (UID: \"e82b0608-77fd-4e73-bafb-00a7b43b6299\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rvxd2" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.318815 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xdtl8" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.319460 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8bvb\" (UniqueName: \"kubernetes.io/projected/b6f768c0-a04d-49f7-a0ae-5ea5ee09c26b-kube-api-access-h8bvb\") pod \"ironic-operator-controller-manager-554564d7fc-ldqdh\" (UID: \"b6f768c0-a04d-49f7-a0ae-5ea5ee09c26b\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ldqdh" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.320336 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rdrbt" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.335158 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-n7n5r"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.336356 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-n7n5r" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.338622 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-52fmp"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.345224 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6qrx" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.352041 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kng9c\" (UniqueName: \"kubernetes.io/projected/96c16cf0-31b6-4830-b92f-f25b4ce11979-kube-api-access-kng9c\") pod \"infra-operator-controller-manager-79d975b745-szzvb\" (UID: \"96c16cf0-31b6-4830-b92f-f25b4ce11979\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.355732 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bthlf\" (UniqueName: \"kubernetes.io/projected/f9800e95-aed6-4d9b-9e88-b6a5f303ee16-kube-api-access-bthlf\") pod \"heat-operator-controller-manager-69f49c598c-9cnsb\" (UID: \"f9800e95-aed6-4d9b-9e88-b6a5f303ee16\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9cnsb" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.365326 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-pj5s8" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.368175 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-n7n5r"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.381410 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-xdtl8"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.381865 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rvxd2" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.387150 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvtwk\" (UniqueName: \"kubernetes.io/projected/a14214f1-4961-4ade-ba45-d48139b6fd0d-kube-api-access-bvtwk\") pod \"mariadb-operator-controller-manager-6994f66f48-sl49j\" (UID: \"a14214f1-4961-4ade-ba45-d48139b6fd0d\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sl49j" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.387493 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnjjg\" (UniqueName: \"kubernetes.io/projected/8d4a2d32-4724-4580-a542-7552e580ed15-kube-api-access-xnjjg\") pod \"keystone-operator-controller-manager-b4d948c87-2lmml\" (UID: \"8d4a2d32-4724-4580-a542-7552e580ed15\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2lmml" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.387643 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7gst\" (UniqueName: \"kubernetes.io/projected/927be7f4-3bc1-42c8-917f-8b898bbbc21a-kube-api-access-r7gst\") pod \"manila-operator-controller-manager-54f6768c69-nljkj\" (UID: \"927be7f4-3bc1-42c8-917f-8b898bbbc21a\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-nljkj" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.387891 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wc7q\" (UniqueName: \"kubernetes.io/projected/5d07e1a5-0372-4721-ac7a-66c568e32be1-kube-api-access-4wc7q\") pod \"neutron-operator-controller-manager-64ddbf8bb-52fmp\" (UID: \"5d07e1a5-0372-4721-ac7a-66c568e32be1\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-52fmp" Feb 18 12:03:46 crc 
kubenswrapper[4717]: I0218 12:03:46.389197 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9cnsb" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.403991 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.404915 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.410232 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvtwk\" (UniqueName: \"kubernetes.io/projected/a14214f1-4961-4ade-ba45-d48139b6fd0d-kube-api-access-bvtwk\") pod \"mariadb-operator-controller-manager-6994f66f48-sl49j\" (UID: \"a14214f1-4961-4ade-ba45-d48139b6fd0d\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sl49j" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.412118 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.413453 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnjjg\" (UniqueName: \"kubernetes.io/projected/8d4a2d32-4724-4580-a542-7552e580ed15-kube-api-access-xnjjg\") pod \"keystone-operator-controller-manager-b4d948c87-2lmml\" (UID: \"8d4a2d32-4724-4580-a542-7552e580ed15\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2lmml" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.413810 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-8gl2d" Feb 18 12:03:46 crc kubenswrapper[4717]: 
I0218 12:03:46.415095 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7gst\" (UniqueName: \"kubernetes.io/projected/927be7f4-3bc1-42c8-917f-8b898bbbc21a-kube-api-access-r7gst\") pod \"manila-operator-controller-manager-54f6768c69-nljkj\" (UID: \"927be7f4-3bc1-42c8-917f-8b898bbbc21a\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-nljkj" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.419124 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-lx8wj"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.420063 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-lx8wj" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.422552 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6pcd5" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.459374 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.493379 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ldqdh" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.513173 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24\" (UID: \"6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.516424 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-j9pkf"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.517571 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wc7q\" (UniqueName: \"kubernetes.io/projected/5d07e1a5-0372-4721-ac7a-66c568e32be1-kube-api-access-4wc7q\") pod \"neutron-operator-controller-manager-64ddbf8bb-52fmp\" (UID: \"5d07e1a5-0372-4721-ac7a-66c568e32be1\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-52fmp" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.517669 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkc7q\" (UniqueName: \"kubernetes.io/projected/7c5e0309-c138-4668-bad9-eacff0124d24-kube-api-access-hkc7q\") pod \"nova-operator-controller-manager-567668f5cf-xdtl8\" (UID: \"7c5e0309-c138-4668-bad9-eacff0124d24\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xdtl8" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.517722 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkvkv\" (UniqueName: \"kubernetes.io/projected/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-kube-api-access-xkvkv\") pod 
\"openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24\" (UID: \"6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.526280 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzwtx\" (UniqueName: \"kubernetes.io/projected/88c2fec0-988b-4496-b054-43f965e23324-kube-api-access-bzwtx\") pod \"octavia-operator-controller-manager-69f8888797-n7n5r\" (UID: \"88c2fec0-988b-4496-b054-43f965e23324\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-n7n5r" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.565555 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j9pkf" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.572990 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-nz542" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.575074 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2lmml" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.588336 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wc7q\" (UniqueName: \"kubernetes.io/projected/5d07e1a5-0372-4721-ac7a-66c568e32be1-kube-api-access-4wc7q\") pod \"neutron-operator-controller-manager-64ddbf8bb-52fmp\" (UID: \"5d07e1a5-0372-4721-ac7a-66c568e32be1\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-52fmp" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.622832 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-lx8wj"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.622876 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-j9pkf"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.624299 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-nljkj" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.629394 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-26mhv"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.630294 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24\" (UID: \"6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.630358 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkf6p\" (UniqueName: \"kubernetes.io/projected/196844a3-3220-4557-93a1-dc0887bbb53f-kube-api-access-mkf6p\") pod \"ovn-operator-controller-manager-d44cf6b75-lx8wj\" (UID: \"196844a3-3220-4557-93a1-dc0887bbb53f\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-lx8wj" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.630394 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkc7q\" (UniqueName: \"kubernetes.io/projected/7c5e0309-c138-4668-bad9-eacff0124d24-kube-api-access-hkc7q\") pod \"nova-operator-controller-manager-567668f5cf-xdtl8\" (UID: \"7c5e0309-c138-4668-bad9-eacff0124d24\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xdtl8" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.630426 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkvkv\" (UniqueName: \"kubernetes.io/projected/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-kube-api-access-xkvkv\") pod 
\"openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24\" (UID: \"6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.630500 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzwtx\" (UniqueName: \"kubernetes.io/projected/88c2fec0-988b-4496-b054-43f965e23324-kube-api-access-bzwtx\") pod \"octavia-operator-controller-manager-69f8888797-n7n5r\" (UID: \"88c2fec0-988b-4496-b054-43f965e23324\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-n7n5r" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.630678 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-26mhv" Feb 18 12:03:46 crc kubenswrapper[4717]: E0218 12:03:46.630851 4717 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 12:03:46 crc kubenswrapper[4717]: E0218 12:03:46.630903 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert podName:6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe nodeName:}" failed. No retries permitted until 2026-02-18 12:03:47.13088358 +0000 UTC m=+861.532984896 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" (UID: "6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.637643 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2mcdf" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.651661 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sl49j" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.671744 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-52fmp" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.677557 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkc7q\" (UniqueName: \"kubernetes.io/projected/7c5e0309-c138-4668-bad9-eacff0124d24-kube-api-access-hkc7q\") pod \"nova-operator-controller-manager-567668f5cf-xdtl8\" (UID: \"7c5e0309-c138-4668-bad9-eacff0124d24\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xdtl8" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.704053 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-26mhv"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.716406 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzwtx\" (UniqueName: \"kubernetes.io/projected/88c2fec0-988b-4496-b054-43f965e23324-kube-api-access-bzwtx\") pod \"octavia-operator-controller-manager-69f8888797-n7n5r\" (UID: \"88c2fec0-988b-4496-b054-43f965e23324\") " 
pod="openstack-operators/octavia-operator-controller-manager-69f8888797-n7n5r" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.717226 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkvkv\" (UniqueName: \"kubernetes.io/projected/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-kube-api-access-xkvkv\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24\" (UID: \"6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.737569 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkf6p\" (UniqueName: \"kubernetes.io/projected/196844a3-3220-4557-93a1-dc0887bbb53f-kube-api-access-mkf6p\") pod \"ovn-operator-controller-manager-d44cf6b75-lx8wj\" (UID: \"196844a3-3220-4557-93a1-dc0887bbb53f\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-lx8wj" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.737615 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4zgc\" (UniqueName: \"kubernetes.io/projected/3faac3ae-2788-4a36-8241-09a601267885-kube-api-access-n4zgc\") pod \"swift-operator-controller-manager-68f46476f-26mhv\" (UID: \"3faac3ae-2788-4a36-8241-09a601267885\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-26mhv" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.737685 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgzjb\" (UniqueName: \"kubernetes.io/projected/3b988944-4f1b-4fb3-89ff-b1a0e61853dc-kube-api-access-tgzjb\") pod \"placement-operator-controller-manager-8497b45c89-j9pkf\" (UID: \"3b988944-4f1b-4fb3-89ff-b1a0e61853dc\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j9pkf" Feb 18 12:03:46 crc 
kubenswrapper[4717]: I0218 12:03:46.742316 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cmrvc"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.755135 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cmrvc" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.765380 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-7lcbq" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.770927 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xdtl8" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.803670 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkf6p\" (UniqueName: \"kubernetes.io/projected/196844a3-3220-4557-93a1-dc0887bbb53f-kube-api-access-mkf6p\") pod \"ovn-operator-controller-manager-d44cf6b75-lx8wj\" (UID: \"196844a3-3220-4557-93a1-dc0887bbb53f\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-lx8wj" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.811564 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cmrvc"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.838835 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4zgc\" (UniqueName: \"kubernetes.io/projected/3faac3ae-2788-4a36-8241-09a601267885-kube-api-access-n4zgc\") pod \"swift-operator-controller-manager-68f46476f-26mhv\" (UID: \"3faac3ae-2788-4a36-8241-09a601267885\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-26mhv" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.838891 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert\") pod \"infra-operator-controller-manager-79d975b745-szzvb\" (UID: \"96c16cf0-31b6-4830-b92f-f25b4ce11979\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.838944 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj2g2\" (UniqueName: \"kubernetes.io/projected/e95271e1-5edd-4862-9dd9-e7ad1feb0ed0-kube-api-access-mj2g2\") pod \"telemetry-operator-controller-manager-7f45b4ff68-cmrvc\" (UID: \"e95271e1-5edd-4862-9dd9-e7ad1feb0ed0\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cmrvc" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.838977 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgzjb\" (UniqueName: \"kubernetes.io/projected/3b988944-4f1b-4fb3-89ff-b1a0e61853dc-kube-api-access-tgzjb\") pod \"placement-operator-controller-manager-8497b45c89-j9pkf\" (UID: \"3b988944-4f1b-4fb3-89ff-b1a0e61853dc\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j9pkf" Feb 18 12:03:46 crc kubenswrapper[4717]: E0218 12:03:46.839512 4717 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 12:03:46 crc kubenswrapper[4717]: E0218 12:03:46.839565 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert podName:96c16cf0-31b6-4830-b92f-f25b4ce11979 nodeName:}" failed. No retries permitted until 2026-02-18 12:03:47.839545934 +0000 UTC m=+862.241647250 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert") pod "infra-operator-controller-manager-79d975b745-szzvb" (UID: "96c16cf0-31b6-4830-b92f-f25b4ce11979") : secret "infra-operator-webhook-server-cert" not found Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.862418 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4zgc\" (UniqueName: \"kubernetes.io/projected/3faac3ae-2788-4a36-8241-09a601267885-kube-api-access-n4zgc\") pod \"swift-operator-controller-manager-68f46476f-26mhv\" (UID: \"3faac3ae-2788-4a36-8241-09a601267885\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-26mhv" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.863393 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-9tf7v"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.864356 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-9tf7v" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.877100 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-n7n5r" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.878383 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgzjb\" (UniqueName: \"kubernetes.io/projected/3b988944-4f1b-4fb3-89ff-b1a0e61853dc-kube-api-access-tgzjb\") pod \"placement-operator-controller-manager-8497b45c89-j9pkf\" (UID: \"3b988944-4f1b-4fb3-89ff-b1a0e61853dc\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j9pkf" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.886883 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-z97sd" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.907764 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-9tf7v"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.946539 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxstd\" (UniqueName: \"kubernetes.io/projected/e2e22987-3a27-4550-8593-c54e5628e941-kube-api-access-gxstd\") pod \"test-operator-controller-manager-7866795846-9tf7v\" (UID: \"e2e22987-3a27-4550-8593-c54e5628e941\") " pod="openstack-operators/test-operator-controller-manager-7866795846-9tf7v" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.946619 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj2g2\" (UniqueName: \"kubernetes.io/projected/e95271e1-5edd-4862-9dd9-e7ad1feb0ed0-kube-api-access-mj2g2\") pod \"telemetry-operator-controller-manager-7f45b4ff68-cmrvc\" (UID: \"e95271e1-5edd-4862-9dd9-e7ad1feb0ed0\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cmrvc" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.960689 4717 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-lx8wj" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.980562 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-h59sk"] Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.981533 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h59sk" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.983023 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj2g2\" (UniqueName: \"kubernetes.io/projected/e95271e1-5edd-4862-9dd9-e7ad1feb0ed0-kube-api-access-mj2g2\") pod \"telemetry-operator-controller-manager-7f45b4ff68-cmrvc\" (UID: \"e95271e1-5edd-4862-9dd9-e7ad1feb0ed0\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cmrvc" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.992409 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-pbcgr" Feb 18 12:03:46 crc kubenswrapper[4717]: I0218 12:03:46.995813 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j9pkf" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.005359 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-h59sk"] Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.030304 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv"] Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.031732 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.036791 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.044427 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.047761 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxstd\" (UniqueName: \"kubernetes.io/projected/e2e22987-3a27-4550-8593-c54e5628e941-kube-api-access-gxstd\") pod \"test-operator-controller-manager-7866795846-9tf7v\" (UID: \"e2e22987-3a27-4550-8593-c54e5628e941\") " pod="openstack-operators/test-operator-controller-manager-7866795846-9tf7v" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.047898 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l759\" (UniqueName: \"kubernetes.io/projected/886d7474-df3b-4777-bd48-d3bf188f7fc9-kube-api-access-8l759\") pod \"watcher-operator-controller-manager-5db88f68c-h59sk\" (UID: \"886d7474-df3b-4777-bd48-d3bf188f7fc9\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h59sk" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.047773 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-j8k4r" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.053620 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-26mhv" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.062800 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv"] Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.104726 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cmrvc" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.122126 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jrzq4"] Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.123376 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jrzq4" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.131697 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-zl52v" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.133230 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jrzq4"] Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.135389 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxstd\" (UniqueName: \"kubernetes.io/projected/e2e22987-3a27-4550-8593-c54e5628e941-kube-api-access-gxstd\") pod \"test-operator-controller-manager-7866795846-9tf7v\" (UID: \"e2e22987-3a27-4550-8593-c54e5628e941\") " pod="openstack-operators/test-operator-controller-manager-7866795846-9tf7v" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.149313 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l759\" (UniqueName: 
\"kubernetes.io/projected/886d7474-df3b-4777-bd48-d3bf188f7fc9-kube-api-access-8l759\") pod \"watcher-operator-controller-manager-5db88f68c-h59sk\" (UID: \"886d7474-df3b-4777-bd48-d3bf188f7fc9\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h59sk" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.149405 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.149502 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v78kc\" (UniqueName: \"kubernetes.io/projected/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-kube-api-access-v78kc\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.149547 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24\" (UID: \"6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.149578 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs\") pod 
\"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:03:47 crc kubenswrapper[4717]: E0218 12:03:47.151569 4717 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 12:03:47 crc kubenswrapper[4717]: E0218 12:03:47.151633 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert podName:6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe nodeName:}" failed. No retries permitted until 2026-02-18 12:03:48.151614734 +0000 UTC m=+862.553716050 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" (UID: "6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.154996 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-2n2t2"] Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.201585 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l759\" (UniqueName: \"kubernetes.io/projected/886d7474-df3b-4777-bd48-d3bf188f7fc9-kube-api-access-8l759\") pod \"watcher-operator-controller-manager-5db88f68c-h59sk\" (UID: \"886d7474-df3b-4777-bd48-d3bf188f7fc9\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h59sk" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.252425 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.252502 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-9tf7v" Feb 18 12:03:47 crc kubenswrapper[4717]: E0218 12:03:47.253439 4717 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 12:03:47 crc kubenswrapper[4717]: E0218 12:03:47.253524 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs podName:fa9ea26a-44d8-4c4d-8766-d1c19fa59d70 nodeName:}" failed. No retries permitted until 2026-02-18 12:03:47.753484723 +0000 UTC m=+862.155586029 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs") pod "openstack-operator-controller-manager-846fd54586-rqvhv" (UID: "fa9ea26a-44d8-4c4d-8766-d1c19fa59d70") : secret "metrics-server-cert" not found Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.255103 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.255224 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcgcs\" (UniqueName: \"kubernetes.io/projected/089bc44f-bd8a-45b5-a497-17cfc2d38bee-kube-api-access-zcgcs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jrzq4\" (UID: \"089bc44f-bd8a-45b5-a497-17cfc2d38bee\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jrzq4" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.255348 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v78kc\" (UniqueName: \"kubernetes.io/projected/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-kube-api-access-v78kc\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:03:47 crc kubenswrapper[4717]: E0218 12:03:47.255656 4717 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 12:03:47 crc kubenswrapper[4717]: E0218 12:03:47.256200 4717 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs podName:fa9ea26a-44d8-4c4d-8766-d1c19fa59d70 nodeName:}" failed. No retries permitted until 2026-02-18 12:03:47.75572085 +0000 UTC m=+862.157822166 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs") pod "openstack-operator-controller-manager-846fd54586-rqvhv" (UID: "fa9ea26a-44d8-4c4d-8766-d1c19fa59d70") : secret "webhook-server-cert" not found Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.284903 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v78kc\" (UniqueName: \"kubernetes.io/projected/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-kube-api-access-v78kc\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.340382 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h59sk" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.349456 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-jhnvm"] Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.366408 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcgcs\" (UniqueName: \"kubernetes.io/projected/089bc44f-bd8a-45b5-a497-17cfc2d38bee-kube-api-access-zcgcs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jrzq4\" (UID: \"089bc44f-bd8a-45b5-a497-17cfc2d38bee\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jrzq4" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.388318 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-5lvkl"] Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.395630 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcgcs\" (UniqueName: \"kubernetes.io/projected/089bc44f-bd8a-45b5-a497-17cfc2d38bee-kube-api-access-zcgcs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jrzq4\" (UID: \"089bc44f-bd8a-45b5-a497-17cfc2d38bee\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jrzq4" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.403106 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2n2t2" event={"ID":"986ac762-6758-4402-a5c9-849780ff7fab","Type":"ContainerStarted","Data":"5eb8b5cf143ba8f8658de8f6032cec5c7a0c96664b0bbe23a95a54c67a8e3d43"} Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.632655 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jrzq4" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.677405 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-9cnsb"] Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.687208 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-v6qrx"] Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.785457 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.785685 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:03:47 crc kubenswrapper[4717]: E0218 12:03:47.785771 4717 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 12:03:47 crc kubenswrapper[4717]: E0218 12:03:47.785856 4717 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 12:03:47 crc kubenswrapper[4717]: E0218 12:03:47.785864 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs podName:fa9ea26a-44d8-4c4d-8766-d1c19fa59d70 
nodeName:}" failed. No retries permitted until 2026-02-18 12:03:48.785841174 +0000 UTC m=+863.187942490 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs") pod "openstack-operator-controller-manager-846fd54586-rqvhv" (UID: "fa9ea26a-44d8-4c4d-8766-d1c19fa59d70") : secret "metrics-server-cert" not found Feb 18 12:03:47 crc kubenswrapper[4717]: E0218 12:03:47.785978 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs podName:fa9ea26a-44d8-4c4d-8766-d1c19fa59d70 nodeName:}" failed. No retries permitted until 2026-02-18 12:03:48.785918326 +0000 UTC m=+863.188019642 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs") pod "openstack-operator-controller-manager-846fd54586-rqvhv" (UID: "fa9ea26a-44d8-4c4d-8766-d1c19fa59d70") : secret "webhook-server-cert" not found Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.887136 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert\") pod \"infra-operator-controller-manager-79d975b745-szzvb\" (UID: \"96c16cf0-31b6-4830-b92f-f25b4ce11979\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" Feb 18 12:03:47 crc kubenswrapper[4717]: E0218 12:03:47.887514 4717 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 12:03:47 crc kubenswrapper[4717]: E0218 12:03:47.887640 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert podName:96c16cf0-31b6-4830-b92f-f25b4ce11979 nodeName:}" failed. 
No retries permitted until 2026-02-18 12:03:49.887604089 +0000 UTC m=+864.289705585 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert") pod "infra-operator-controller-manager-79d975b745-szzvb" (UID: "96c16cf0-31b6-4830-b92f-f25b4ce11979") : secret "infra-operator-webhook-server-cert" not found Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.935956 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rvxd2"] Feb 18 12:03:47 crc kubenswrapper[4717]: W0218 12:03:47.957057 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode82b0608_77fd_4e73_bafb_00a7b43b6299.slice/crio-f5b16b69d756bdacecdc17a922dc158195aa5268007e3f68aba858e37790bec7 WatchSource:0}: Error finding container f5b16b69d756bdacecdc17a922dc158195aa5268007e3f68aba858e37790bec7: Status 404 returned error can't find the container with id f5b16b69d756bdacecdc17a922dc158195aa5268007e3f68aba858e37790bec7 Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.973031 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-ldqdh"] Feb 18 12:03:47 crc kubenswrapper[4717]: I0218 12:03:47.994739 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-sl49j"] Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.015467 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-2lmml"] Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.029378 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-52fmp"] Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.110497 4717 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-n7n5r"] Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.129194 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-xdtl8"] Feb 18 12:03:48 crc kubenswrapper[4717]: W0218 12:03:48.129772 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c5e0309_c138_4668_bad9_eacff0124d24.slice/crio-31fbafdb63c5dc15ccdc31a6617494dfae0a7b317ff8ec5e34349a3b29b869c0 WatchSource:0}: Error finding container 31fbafdb63c5dc15ccdc31a6617494dfae0a7b317ff8ec5e34349a3b29b869c0: Status 404 returned error can't find the container with id 31fbafdb63c5dc15ccdc31a6617494dfae0a7b317ff8ec5e34349a3b29b869c0 Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.195894 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24\" (UID: \"6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" Feb 18 12:03:48 crc kubenswrapper[4717]: E0218 12:03:48.196237 4717 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 12:03:48 crc kubenswrapper[4717]: E0218 12:03:48.196406 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert podName:6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe nodeName:}" failed. No retries permitted until 2026-02-18 12:03:50.19637915 +0000 UTC m=+864.598480466 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" (UID: "6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.235366 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-nljkj"] Feb 18 12:03:48 crc kubenswrapper[4717]: W0218 12:03:48.260550 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod927be7f4_3bc1_42c8_917f_8b898bbbc21a.slice/crio-f6834cc6c516a4a9bac44f214c019d255acabc63ce2d1281ec21829074add32d WatchSource:0}: Error finding container f6834cc6c516a4a9bac44f214c019d255acabc63ce2d1281ec21829074add32d: Status 404 returned error can't find the container with id f6834cc6c516a4a9bac44f214c019d255acabc63ce2d1281ec21829074add32d Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.450545 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cmrvc"] Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.468224 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rvxd2" event={"ID":"e82b0608-77fd-4e73-bafb-00a7b43b6299","Type":"ContainerStarted","Data":"f5b16b69d756bdacecdc17a922dc158195aa5268007e3f68aba858e37790bec7"} Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.470222 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jhnvm" event={"ID":"3503ed6a-e486-404f-8ac3-df63d9d28c2d","Type":"ContainerStarted","Data":"7fe7194a36804b272e37e52c7b6b2741879d9523d0095edd84cb0fb70a0383a2"} Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 
12:03:48.476187 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5lvkl" event={"ID":"eca115e0-882d-4173-a714-1883215088b5","Type":"ContainerStarted","Data":"f5baa511e9da5ce74a1c5b607036e48e38ce50e7380ffac16cf1e175c8eabbdd"} Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.478028 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-n7n5r" event={"ID":"88c2fec0-988b-4496-b054-43f965e23324","Type":"ContainerStarted","Data":"cbdfc1b45fdc0b771cc7161ab449d22f9219937f2f23fc24363c2597f066be93"} Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.478354 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-lx8wj"] Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.481344 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sl49j" event={"ID":"a14214f1-4961-4ade-ba45-d48139b6fd0d","Type":"ContainerStarted","Data":"efe495206b92d3b3332b4ca5fc42ef1d2fde1d3aa66a2da8ec0838106df2e851"} Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.494447 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-j9pkf"] Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.509463 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-9tf7v"] Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.516565 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6qrx" event={"ID":"ba6c36b6-f446-4cc2-b1b0-9ebb6146dfde","Type":"ContainerStarted","Data":"c62619363936e68d24d708928596eaeed326ce4d4494ab83cfa1db1875a6b5fd"} Feb 18 12:03:48 crc kubenswrapper[4717]: E0218 12:03:48.519126 
4717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mkf6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-lx8wj_openstack-operators(196844a3-3220-4557-93a1-dc0887bbb53f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 12:03:48 crc kubenswrapper[4717]: E0218 12:03:48.519218 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tgzjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-j9pkf_openstack-operators(3b988944-4f1b-4fb3-89ff-b1a0e61853dc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.519881 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-52fmp" event={"ID":"5d07e1a5-0372-4721-ac7a-66c568e32be1","Type":"ContainerStarted","Data":"373db774575b21382387add69e6dc36ff6089c1a43aeffa2de3ccd2a690b0e00"} Feb 18 
12:03:48 crc kubenswrapper[4717]: E0218 12:03:48.520214 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-lx8wj" podUID="196844a3-3220-4557-93a1-dc0887bbb53f" Feb 18 12:03:48 crc kubenswrapper[4717]: E0218 12:03:48.520375 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j9pkf" podUID="3b988944-4f1b-4fb3-89ff-b1a0e61853dc" Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.522917 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-h59sk"] Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.526997 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xdtl8" event={"ID":"7c5e0309-c138-4668-bad9-eacff0124d24","Type":"ContainerStarted","Data":"31fbafdb63c5dc15ccdc31a6617494dfae0a7b317ff8ec5e34349a3b29b869c0"} Feb 18 12:03:48 crc kubenswrapper[4717]: E0218 12:03:48.529024 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n4zgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-26mhv_openstack-operators(3faac3ae-2788-4a36-8241-09a601267885): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.529349 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9cnsb" 
event={"ID":"f9800e95-aed6-4d9b-9e88-b6a5f303ee16","Type":"ContainerStarted","Data":"4dedac7ebcc0f21fa20fea14462827c966b39f57c4b441c56cd482310045aae0"} Feb 18 12:03:48 crc kubenswrapper[4717]: E0218 12:03:48.530271 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-26mhv" podUID="3faac3ae-2788-4a36-8241-09a601267885" Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.531540 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-nljkj" event={"ID":"927be7f4-3bc1-42c8-917f-8b898bbbc21a","Type":"ContainerStarted","Data":"f6834cc6c516a4a9bac44f214c019d255acabc63ce2d1281ec21829074add32d"} Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.533039 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2lmml" event={"ID":"8d4a2d32-4724-4580-a542-7552e580ed15","Type":"ContainerStarted","Data":"21be54c01f91ee42669aa0609677a80fb10a94e8921ce64b768606dba71cac47"} Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.534426 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-26mhv"] Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.534724 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ldqdh" event={"ID":"b6f768c0-a04d-49f7-a0ae-5ea5ee09c26b","Type":"ContainerStarted","Data":"374a819ef3279510ede8fbf540deadf52ea464fa1895d1ec8fa0b328cc6267c9"} Feb 18 12:03:48 crc kubenswrapper[4717]: W0218 12:03:48.539468 4717 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod089bc44f_bd8a_45b5_a497_17cfc2d38bee.slice/crio-030141dad1066790b1eb0c7ea3a0af15c04aca94219b97862e6b5bd242e9fba7 WatchSource:0}: Error finding container 030141dad1066790b1eb0c7ea3a0af15c04aca94219b97862e6b5bd242e9fba7: Status 404 returned error can't find the container with id 030141dad1066790b1eb0c7ea3a0af15c04aca94219b97862e6b5bd242e9fba7 Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.540120 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jrzq4"] Feb 18 12:03:48 crc kubenswrapper[4717]: E0218 12:03:48.545734 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zcgcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-jrzq4_openstack-operators(089bc44f-bd8a-45b5-a497-17cfc2d38bee): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 12:03:48 crc kubenswrapper[4717]: E0218 12:03:48.548103 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jrzq4" podUID="089bc44f-bd8a-45b5-a497-17cfc2d38bee" Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.812670 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:03:48 crc kubenswrapper[4717]: I0218 12:03:48.812812 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:03:48 crc kubenswrapper[4717]: E0218 12:03:48.812933 4717 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 12:03:48 crc kubenswrapper[4717]: E0218 12:03:48.813018 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs podName:fa9ea26a-44d8-4c4d-8766-d1c19fa59d70 nodeName:}" failed. No retries permitted until 2026-02-18 12:03:50.812989074 +0000 UTC m=+865.215090390 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs") pod "openstack-operator-controller-manager-846fd54586-rqvhv" (UID: "fa9ea26a-44d8-4c4d-8766-d1c19fa59d70") : secret "webhook-server-cert" not found Feb 18 12:03:48 crc kubenswrapper[4717]: E0218 12:03:48.812933 4717 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 12:03:48 crc kubenswrapper[4717]: E0218 12:03:48.813064 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs podName:fa9ea26a-44d8-4c4d-8766-d1c19fa59d70 nodeName:}" failed. No retries permitted until 2026-02-18 12:03:50.813053396 +0000 UTC m=+865.215154712 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs") pod "openstack-operator-controller-manager-846fd54586-rqvhv" (UID: "fa9ea26a-44d8-4c4d-8766-d1c19fa59d70") : secret "metrics-server-cert" not found Feb 18 12:03:49 crc kubenswrapper[4717]: I0218 12:03:49.569473 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jrzq4" event={"ID":"089bc44f-bd8a-45b5-a497-17cfc2d38bee","Type":"ContainerStarted","Data":"030141dad1066790b1eb0c7ea3a0af15c04aca94219b97862e6b5bd242e9fba7"} Feb 18 12:03:49 crc kubenswrapper[4717]: I0218 12:03:49.592704 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-lx8wj" event={"ID":"196844a3-3220-4557-93a1-dc0887bbb53f","Type":"ContainerStarted","Data":"29d853c5f4222dc7984c3367a9fd1af396418087e9cff3d37d8e9f80cab02bce"} Feb 18 12:03:49 crc kubenswrapper[4717]: E0218 12:03:49.592819 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jrzq4" podUID="089bc44f-bd8a-45b5-a497-17cfc2d38bee" Feb 18 12:03:49 crc kubenswrapper[4717]: E0218 12:03:49.615505 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-lx8wj" podUID="196844a3-3220-4557-93a1-dc0887bbb53f" Feb 18 12:03:49 crc kubenswrapper[4717]: I0218 12:03:49.617841 
4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-9tf7v" event={"ID":"e2e22987-3a27-4550-8593-c54e5628e941","Type":"ContainerStarted","Data":"6f4bb3fa7243902661e68688eaae01e84210b9ff87d9c4847a1e1898fb458254"} Feb 18 12:03:49 crc kubenswrapper[4717]: I0218 12:03:49.638143 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cmrvc" event={"ID":"e95271e1-5edd-4862-9dd9-e7ad1feb0ed0","Type":"ContainerStarted","Data":"9679f5aa489bbb764abe5419ac0d0cb87215aa9f400c3b49b0f00b1a6aa7827e"} Feb 18 12:03:49 crc kubenswrapper[4717]: I0218 12:03:49.658877 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j9pkf" event={"ID":"3b988944-4f1b-4fb3-89ff-b1a0e61853dc","Type":"ContainerStarted","Data":"5b99172f8dd56d999c8a88cbc7fa07c279d8301e9227997c19efdf7fe243a033"} Feb 18 12:03:49 crc kubenswrapper[4717]: E0218 12:03:49.663027 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j9pkf" podUID="3b988944-4f1b-4fb3-89ff-b1a0e61853dc" Feb 18 12:03:49 crc kubenswrapper[4717]: I0218 12:03:49.680668 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h59sk" event={"ID":"886d7474-df3b-4777-bd48-d3bf188f7fc9","Type":"ContainerStarted","Data":"e7861a9c0a233512a935a72e3ed94638bf78717e4803fef9191dccf8ff95e76c"} Feb 18 12:03:49 crc kubenswrapper[4717]: I0218 12:03:49.693323 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-26mhv" event={"ID":"3faac3ae-2788-4a36-8241-09a601267885","Type":"ContainerStarted","Data":"5250ed766b4107b58368adfa4162321ba46744760d6a90e9eb79e3ba4730927c"} Feb 18 12:03:49 crc kubenswrapper[4717]: E0218 12:03:49.700303 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-26mhv" podUID="3faac3ae-2788-4a36-8241-09a601267885" Feb 18 12:03:49 crc kubenswrapper[4717]: I0218 12:03:49.825495 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z4jwc"] Feb 18 12:03:49 crc kubenswrapper[4717]: I0218 12:03:49.829759 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z4jwc"] Feb 18 12:03:49 crc kubenswrapper[4717]: I0218 12:03:49.829882 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:03:49 crc kubenswrapper[4717]: I0218 12:03:49.937109 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c40328-be7c-4470-80fe-6bdf6e254d8f-utilities\") pod \"certified-operators-z4jwc\" (UID: \"35c40328-be7c-4470-80fe-6bdf6e254d8f\") " pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:03:49 crc kubenswrapper[4717]: I0218 12:03:49.937273 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert\") pod \"infra-operator-controller-manager-79d975b745-szzvb\" (UID: \"96c16cf0-31b6-4830-b92f-f25b4ce11979\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" Feb 18 12:03:49 crc kubenswrapper[4717]: I0218 12:03:49.937297 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2vvr\" (UniqueName: \"kubernetes.io/projected/35c40328-be7c-4470-80fe-6bdf6e254d8f-kube-api-access-r2vvr\") pod \"certified-operators-z4jwc\" (UID: \"35c40328-be7c-4470-80fe-6bdf6e254d8f\") " pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:03:49 crc kubenswrapper[4717]: I0218 12:03:49.937325 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c40328-be7c-4470-80fe-6bdf6e254d8f-catalog-content\") pod \"certified-operators-z4jwc\" (UID: \"35c40328-be7c-4470-80fe-6bdf6e254d8f\") " pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:03:49 crc kubenswrapper[4717]: E0218 12:03:49.937464 4717 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 12:03:49 crc 
kubenswrapper[4717]: E0218 12:03:49.937518 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert podName:96c16cf0-31b6-4830-b92f-f25b4ce11979 nodeName:}" failed. No retries permitted until 2026-02-18 12:03:53.93749719 +0000 UTC m=+868.339598506 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert") pod "infra-operator-controller-manager-79d975b745-szzvb" (UID: "96c16cf0-31b6-4830-b92f-f25b4ce11979") : secret "infra-operator-webhook-server-cert" not found Feb 18 12:03:50 crc kubenswrapper[4717]: I0218 12:03:50.038220 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c40328-be7c-4470-80fe-6bdf6e254d8f-utilities\") pod \"certified-operators-z4jwc\" (UID: \"35c40328-be7c-4470-80fe-6bdf6e254d8f\") " pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:03:50 crc kubenswrapper[4717]: I0218 12:03:50.038347 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2vvr\" (UniqueName: \"kubernetes.io/projected/35c40328-be7c-4470-80fe-6bdf6e254d8f-kube-api-access-r2vvr\") pod \"certified-operators-z4jwc\" (UID: \"35c40328-be7c-4470-80fe-6bdf6e254d8f\") " pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:03:50 crc kubenswrapper[4717]: I0218 12:03:50.038377 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c40328-be7c-4470-80fe-6bdf6e254d8f-catalog-content\") pod \"certified-operators-z4jwc\" (UID: \"35c40328-be7c-4470-80fe-6bdf6e254d8f\") " pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:03:50 crc kubenswrapper[4717]: I0218 12:03:50.038784 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/35c40328-be7c-4470-80fe-6bdf6e254d8f-utilities\") pod \"certified-operators-z4jwc\" (UID: \"35c40328-be7c-4470-80fe-6bdf6e254d8f\") " pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:03:50 crc kubenswrapper[4717]: I0218 12:03:50.039811 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c40328-be7c-4470-80fe-6bdf6e254d8f-catalog-content\") pod \"certified-operators-z4jwc\" (UID: \"35c40328-be7c-4470-80fe-6bdf6e254d8f\") " pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:03:50 crc kubenswrapper[4717]: I0218 12:03:50.068805 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2vvr\" (UniqueName: \"kubernetes.io/projected/35c40328-be7c-4470-80fe-6bdf6e254d8f-kube-api-access-r2vvr\") pod \"certified-operators-z4jwc\" (UID: \"35c40328-be7c-4470-80fe-6bdf6e254d8f\") " pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:03:50 crc kubenswrapper[4717]: I0218 12:03:50.169908 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:03:50 crc kubenswrapper[4717]: I0218 12:03:50.244174 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24\" (UID: \"6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" Feb 18 12:03:50 crc kubenswrapper[4717]: E0218 12:03:50.245602 4717 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 12:03:50 crc kubenswrapper[4717]: E0218 12:03:50.245747 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert podName:6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe nodeName:}" failed. No retries permitted until 2026-02-18 12:03:54.245713105 +0000 UTC m=+868.647814421 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" (UID: "6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 12:03:50 crc kubenswrapper[4717]: E0218 12:03:50.705586 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-lx8wj" podUID="196844a3-3220-4557-93a1-dc0887bbb53f" Feb 18 12:03:50 crc kubenswrapper[4717]: E0218 12:03:50.705661 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-26mhv" podUID="3faac3ae-2788-4a36-8241-09a601267885" Feb 18 12:03:50 crc kubenswrapper[4717]: E0218 12:03:50.711846 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jrzq4" podUID="089bc44f-bd8a-45b5-a497-17cfc2d38bee" Feb 18 12:03:50 crc kubenswrapper[4717]: E0218 12:03:50.711862 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j9pkf" podUID="3b988944-4f1b-4fb3-89ff-b1a0e61853dc" Feb 18 12:03:50 crc kubenswrapper[4717]: I0218 12:03:50.866729 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:03:50 crc kubenswrapper[4717]: I0218 12:03:50.866925 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:03:50 crc kubenswrapper[4717]: E0218 12:03:50.866967 4717 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 12:03:50 crc kubenswrapper[4717]: E0218 12:03:50.867069 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs podName:fa9ea26a-44d8-4c4d-8766-d1c19fa59d70 nodeName:}" failed. No retries permitted until 2026-02-18 12:03:54.86704849 +0000 UTC m=+869.269149806 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs") pod "openstack-operator-controller-manager-846fd54586-rqvhv" (UID: "fa9ea26a-44d8-4c4d-8766-d1c19fa59d70") : secret "metrics-server-cert" not found Feb 18 12:03:50 crc kubenswrapper[4717]: E0218 12:03:50.867189 4717 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 12:03:50 crc kubenswrapper[4717]: E0218 12:03:50.867325 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs podName:fa9ea26a-44d8-4c4d-8766-d1c19fa59d70 nodeName:}" failed. No retries permitted until 2026-02-18 12:03:54.867296078 +0000 UTC m=+869.269397394 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs") pod "openstack-operator-controller-manager-846fd54586-rqvhv" (UID: "fa9ea26a-44d8-4c4d-8766-d1c19fa59d70") : secret "webhook-server-cert" not found Feb 18 12:03:52 crc kubenswrapper[4717]: I0218 12:03:52.393693 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l9p7h"] Feb 18 12:03:52 crc kubenswrapper[4717]: I0218 12:03:52.398752 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:03:52 crc kubenswrapper[4717]: I0218 12:03:52.413393 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l9p7h"] Feb 18 12:03:52 crc kubenswrapper[4717]: I0218 12:03:52.520810 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwtml\" (UniqueName: \"kubernetes.io/projected/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-kube-api-access-bwtml\") pod \"redhat-operators-l9p7h\" (UID: \"cc5d174e-b9b0-46a9-972f-bcdc4ec99462\") " pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:03:52 crc kubenswrapper[4717]: I0218 12:03:52.520881 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-catalog-content\") pod \"redhat-operators-l9p7h\" (UID: \"cc5d174e-b9b0-46a9-972f-bcdc4ec99462\") " pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:03:52 crc kubenswrapper[4717]: I0218 12:03:52.520908 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-utilities\") pod \"redhat-operators-l9p7h\" (UID: \"cc5d174e-b9b0-46a9-972f-bcdc4ec99462\") " pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:03:52 crc kubenswrapper[4717]: I0218 12:03:52.622648 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwtml\" (UniqueName: \"kubernetes.io/projected/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-kube-api-access-bwtml\") pod \"redhat-operators-l9p7h\" (UID: \"cc5d174e-b9b0-46a9-972f-bcdc4ec99462\") " pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:03:52 crc kubenswrapper[4717]: I0218 12:03:52.622801 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-catalog-content\") pod \"redhat-operators-l9p7h\" (UID: \"cc5d174e-b9b0-46a9-972f-bcdc4ec99462\") " pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:03:52 crc kubenswrapper[4717]: I0218 12:03:52.622856 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-utilities\") pod \"redhat-operators-l9p7h\" (UID: \"cc5d174e-b9b0-46a9-972f-bcdc4ec99462\") " pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:03:52 crc kubenswrapper[4717]: I0218 12:03:52.623705 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-utilities\") pod \"redhat-operators-l9p7h\" (UID: \"cc5d174e-b9b0-46a9-972f-bcdc4ec99462\") " pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:03:52 crc kubenswrapper[4717]: I0218 12:03:52.623696 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-catalog-content\") pod \"redhat-operators-l9p7h\" (UID: \"cc5d174e-b9b0-46a9-972f-bcdc4ec99462\") " pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:03:52 crc kubenswrapper[4717]: I0218 12:03:52.670447 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwtml\" (UniqueName: \"kubernetes.io/projected/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-kube-api-access-bwtml\") pod \"redhat-operators-l9p7h\" (UID: \"cc5d174e-b9b0-46a9-972f-bcdc4ec99462\") " pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:03:52 crc kubenswrapper[4717]: I0218 12:03:52.739292 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:03:53 crc kubenswrapper[4717]: I0218 12:03:53.948116 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert\") pod \"infra-operator-controller-manager-79d975b745-szzvb\" (UID: \"96c16cf0-31b6-4830-b92f-f25b4ce11979\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" Feb 18 12:03:53 crc kubenswrapper[4717]: E0218 12:03:53.948414 4717 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 12:03:53 crc kubenswrapper[4717]: E0218 12:03:53.948831 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert podName:96c16cf0-31b6-4830-b92f-f25b4ce11979 nodeName:}" failed. No retries permitted until 2026-02-18 12:04:01.948800022 +0000 UTC m=+876.350901498 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert") pod "infra-operator-controller-manager-79d975b745-szzvb" (UID: "96c16cf0-31b6-4830-b92f-f25b4ce11979") : secret "infra-operator-webhook-server-cert" not found Feb 18 12:03:54 crc kubenswrapper[4717]: I0218 12:03:54.189830 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-84h4d"] Feb 18 12:03:54 crc kubenswrapper[4717]: I0218 12:03:54.191506 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:03:54 crc kubenswrapper[4717]: I0218 12:03:54.207696 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84h4d"] Feb 18 12:03:54 crc kubenswrapper[4717]: I0218 12:03:54.254950 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24\" (UID: \"6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" Feb 18 12:03:54 crc kubenswrapper[4717]: E0218 12:03:54.255249 4717 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 12:03:54 crc kubenswrapper[4717]: E0218 12:03:54.255460 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert podName:6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe nodeName:}" failed. No retries permitted until 2026-02-18 12:04:02.255427099 +0000 UTC m=+876.657528575 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" (UID: "6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 12:03:54 crc kubenswrapper[4717]: I0218 12:03:54.356974 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68536fe0-b0a2-45b8-98d0-a387d8756183-catalog-content\") pod \"community-operators-84h4d\" (UID: \"68536fe0-b0a2-45b8-98d0-a387d8756183\") " pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:03:54 crc kubenswrapper[4717]: I0218 12:03:54.357026 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-857d5\" (UniqueName: \"kubernetes.io/projected/68536fe0-b0a2-45b8-98d0-a387d8756183-kube-api-access-857d5\") pod \"community-operators-84h4d\" (UID: \"68536fe0-b0a2-45b8-98d0-a387d8756183\") " pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:03:54 crc kubenswrapper[4717]: I0218 12:03:54.357099 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68536fe0-b0a2-45b8-98d0-a387d8756183-utilities\") pod \"community-operators-84h4d\" (UID: \"68536fe0-b0a2-45b8-98d0-a387d8756183\") " pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:03:54 crc kubenswrapper[4717]: I0218 12:03:54.458493 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68536fe0-b0a2-45b8-98d0-a387d8756183-utilities\") pod \"community-operators-84h4d\" (UID: \"68536fe0-b0a2-45b8-98d0-a387d8756183\") " pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:03:54 crc kubenswrapper[4717]: I0218 
12:03:54.458643 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68536fe0-b0a2-45b8-98d0-a387d8756183-catalog-content\") pod \"community-operators-84h4d\" (UID: \"68536fe0-b0a2-45b8-98d0-a387d8756183\") " pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:03:54 crc kubenswrapper[4717]: I0218 12:03:54.458663 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-857d5\" (UniqueName: \"kubernetes.io/projected/68536fe0-b0a2-45b8-98d0-a387d8756183-kube-api-access-857d5\") pod \"community-operators-84h4d\" (UID: \"68536fe0-b0a2-45b8-98d0-a387d8756183\") " pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:03:54 crc kubenswrapper[4717]: I0218 12:03:54.459240 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68536fe0-b0a2-45b8-98d0-a387d8756183-utilities\") pod \"community-operators-84h4d\" (UID: \"68536fe0-b0a2-45b8-98d0-a387d8756183\") " pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:03:54 crc kubenswrapper[4717]: I0218 12:03:54.459322 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68536fe0-b0a2-45b8-98d0-a387d8756183-catalog-content\") pod \"community-operators-84h4d\" (UID: \"68536fe0-b0a2-45b8-98d0-a387d8756183\") " pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:03:54 crc kubenswrapper[4717]: I0218 12:03:54.486222 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-857d5\" (UniqueName: \"kubernetes.io/projected/68536fe0-b0a2-45b8-98d0-a387d8756183-kube-api-access-857d5\") pod \"community-operators-84h4d\" (UID: \"68536fe0-b0a2-45b8-98d0-a387d8756183\") " pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:03:54 crc kubenswrapper[4717]: I0218 12:03:54.514097 4717 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:03:54 crc kubenswrapper[4717]: I0218 12:03:54.967178 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:03:54 crc kubenswrapper[4717]: I0218 12:03:54.967327 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:03:54 crc kubenswrapper[4717]: E0218 12:03:54.967399 4717 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 12:03:54 crc kubenswrapper[4717]: E0218 12:03:54.967482 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs podName:fa9ea26a-44d8-4c4d-8766-d1c19fa59d70 nodeName:}" failed. No retries permitted until 2026-02-18 12:04:02.967460671 +0000 UTC m=+877.369561987 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs") pod "openstack-operator-controller-manager-846fd54586-rqvhv" (UID: "fa9ea26a-44d8-4c4d-8766-d1c19fa59d70") : secret "webhook-server-cert" not found Feb 18 12:03:54 crc kubenswrapper[4717]: E0218 12:03:54.967581 4717 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 12:03:54 crc kubenswrapper[4717]: E0218 12:03:54.967690 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs podName:fa9ea26a-44d8-4c4d-8766-d1c19fa59d70 nodeName:}" failed. No retries permitted until 2026-02-18 12:04:02.967664937 +0000 UTC m=+877.369766423 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs") pod "openstack-operator-controller-manager-846fd54586-rqvhv" (UID: "fa9ea26a-44d8-4c4d-8766-d1c19fa59d70") : secret "metrics-server-cert" not found Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.001776 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert\") pod \"infra-operator-controller-manager-79d975b745-szzvb\" (UID: \"96c16cf0-31b6-4830-b92f-f25b4ce11979\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.017584 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96c16cf0-31b6-4830-b92f-f25b4ce11979-cert\") pod \"infra-operator-controller-manager-79d975b745-szzvb\" (UID: \"96c16cf0-31b6-4830-b92f-f25b4ce11979\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" Feb 18 12:04:02 crc 
kubenswrapper[4717]: I0218 12:04:02.075589 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.308876 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24\" (UID: \"6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.313032 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24\" (UID: \"6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" Feb 18 12:04:02 crc kubenswrapper[4717]: E0218 12:04:02.391201 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 18 12:04:02 crc kubenswrapper[4717]: E0218 12:04:02.391448 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bthlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-9cnsb_openstack-operators(f9800e95-aed6-4d9b-9e88-b6a5f303ee16): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:04:02 crc kubenswrapper[4717]: E0218 12:04:02.392870 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9cnsb" podUID="f9800e95-aed6-4d9b-9e88-b6a5f303ee16" Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.467510 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.518466 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ngfkz"] Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.520651 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.526200 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngfkz"] Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.617242 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75253d6-ecc5-47ff-be33-9274290c65fb-catalog-content\") pod \"redhat-marketplace-ngfkz\" (UID: \"e75253d6-ecc5-47ff-be33-9274290c65fb\") " pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.617316 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqp6k\" (UniqueName: \"kubernetes.io/projected/e75253d6-ecc5-47ff-be33-9274290c65fb-kube-api-access-jqp6k\") pod \"redhat-marketplace-ngfkz\" (UID: \"e75253d6-ecc5-47ff-be33-9274290c65fb\") " pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.617346 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75253d6-ecc5-47ff-be33-9274290c65fb-utilities\") pod \"redhat-marketplace-ngfkz\" (UID: \"e75253d6-ecc5-47ff-be33-9274290c65fb\") " pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.718980 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75253d6-ecc5-47ff-be33-9274290c65fb-catalog-content\") pod \"redhat-marketplace-ngfkz\" (UID: \"e75253d6-ecc5-47ff-be33-9274290c65fb\") " pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.719039 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jqp6k\" (UniqueName: \"kubernetes.io/projected/e75253d6-ecc5-47ff-be33-9274290c65fb-kube-api-access-jqp6k\") pod \"redhat-marketplace-ngfkz\" (UID: \"e75253d6-ecc5-47ff-be33-9274290c65fb\") " pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.719066 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75253d6-ecc5-47ff-be33-9274290c65fb-utilities\") pod \"redhat-marketplace-ngfkz\" (UID: \"e75253d6-ecc5-47ff-be33-9274290c65fb\") " pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.720072 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75253d6-ecc5-47ff-be33-9274290c65fb-utilities\") pod \"redhat-marketplace-ngfkz\" (UID: \"e75253d6-ecc5-47ff-be33-9274290c65fb\") " pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.720345 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75253d6-ecc5-47ff-be33-9274290c65fb-catalog-content\") pod \"redhat-marketplace-ngfkz\" (UID: \"e75253d6-ecc5-47ff-be33-9274290c65fb\") " pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.747553 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqp6k\" (UniqueName: \"kubernetes.io/projected/e75253d6-ecc5-47ff-be33-9274290c65fb-kube-api-access-jqp6k\") pod \"redhat-marketplace-ngfkz\" (UID: \"e75253d6-ecc5-47ff-be33-9274290c65fb\") " pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:02 crc kubenswrapper[4717]: E0218 12:04:02.839159 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9cnsb" podUID="f9800e95-aed6-4d9b-9e88-b6a5f303ee16" Feb 18 12:04:02 crc kubenswrapper[4717]: I0218 12:04:02.848344 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:03 crc kubenswrapper[4717]: I0218 12:04:03.024241 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:04:03 crc kubenswrapper[4717]: I0218 12:04:03.024375 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:04:03 crc kubenswrapper[4717]: I0218 12:04:03.031209 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-metrics-certs\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:04:03 crc kubenswrapper[4717]: I0218 12:04:03.043457 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fa9ea26a-44d8-4c4d-8766-d1c19fa59d70-webhook-certs\") pod \"openstack-operator-controller-manager-846fd54586-rqvhv\" (UID: \"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70\") " pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:04:03 crc kubenswrapper[4717]: I0218 12:04:03.318679 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:04:03 crc kubenswrapper[4717]: E0218 12:04:03.390746 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" Feb 18 12:04:03 crc kubenswrapper[4717]: E0218 12:04:03.390955 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8l759,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-h59sk_openstack-operators(886d7474-df3b-4777-bd48-d3bf188f7fc9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:04:03 crc kubenswrapper[4717]: E0218 12:04:03.393016 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h59sk" podUID="886d7474-df3b-4777-bd48-d3bf188f7fc9" Feb 18 12:04:03 crc kubenswrapper[4717]: E0218 12:04:03.846856 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h59sk" podUID="886d7474-df3b-4777-bd48-d3bf188f7fc9" Feb 18 12:04:05 crc kubenswrapper[4717]: E0218 12:04:05.125021 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 18 12:04:05 crc kubenswrapper[4717]: E0218 12:04:05.125734 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xnjjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-2lmml_openstack-operators(8d4a2d32-4724-4580-a542-7552e580ed15): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:04:05 crc kubenswrapper[4717]: E0218 12:04:05.127701 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2lmml" podUID="8d4a2d32-4724-4580-a542-7552e580ed15" Feb 18 12:04:05 crc kubenswrapper[4717]: E0218 12:04:05.764311 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 18 12:04:05 crc kubenswrapper[4717]: E0218 12:04:05.764518 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hkc7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-xdtl8_openstack-operators(7c5e0309-c138-4668-bad9-eacff0124d24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:04:05 crc kubenswrapper[4717]: E0218 12:04:05.766382 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xdtl8" podUID="7c5e0309-c138-4668-bad9-eacff0124d24" Feb 18 12:04:05 crc kubenswrapper[4717]: E0218 12:04:05.876826 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xdtl8" podUID="7c5e0309-c138-4668-bad9-eacff0124d24" Feb 18 12:04:05 crc kubenswrapper[4717]: E0218 12:04:05.876938 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2lmml" podUID="8d4a2d32-4724-4580-a542-7552e580ed15" Feb 18 12:04:06 crc kubenswrapper[4717]: I0218 12:04:06.100065 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z4jwc"] Feb 18 12:04:06 crc kubenswrapper[4717]: I0218 12:04:06.258913 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l9p7h"] Feb 18 12:04:12 crc kubenswrapper[4717]: W0218 12:04:12.352464 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc5d174e_b9b0_46a9_972f_bcdc4ec99462.slice/crio-ee6ceaa35e6619f49c6bf159aca8d435ce80557fbab6011f80353fbf32ccf7cd WatchSource:0}: Error finding container ee6ceaa35e6619f49c6bf159aca8d435ce80557fbab6011f80353fbf32ccf7cd: Status 404 returned error can't find the container with id ee6ceaa35e6619f49c6bf159aca8d435ce80557fbab6011f80353fbf32ccf7cd Feb 18 12:04:12 crc kubenswrapper[4717]: W0218 12:04:12.354473 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35c40328_be7c_4470_80fe_6bdf6e254d8f.slice/crio-a76e77cf754315d923b6d6fcf43c1b90eaf679f87893e898758b935ea87d9071 WatchSource:0}: Error finding container 
a76e77cf754315d923b6d6fcf43c1b90eaf679f87893e898758b935ea87d9071: Status 404 returned error can't find the container with id a76e77cf754315d923b6d6fcf43c1b90eaf679f87893e898758b935ea87d9071 Feb 18 12:04:13 crc kubenswrapper[4717]: I0218 12:04:12.795037 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84h4d"] Feb 18 12:04:13 crc kubenswrapper[4717]: I0218 12:04:12.925613 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9p7h" event={"ID":"cc5d174e-b9b0-46a9-972f-bcdc4ec99462","Type":"ContainerStarted","Data":"ee6ceaa35e6619f49c6bf159aca8d435ce80557fbab6011f80353fbf32ccf7cd"} Feb 18 12:04:13 crc kubenswrapper[4717]: I0218 12:04:12.926593 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4jwc" event={"ID":"35c40328-be7c-4470-80fe-6bdf6e254d8f","Type":"ContainerStarted","Data":"a76e77cf754315d923b6d6fcf43c1b90eaf679f87893e898758b935ea87d9071"} Feb 18 12:04:13 crc kubenswrapper[4717]: I0218 12:04:13.934978 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84h4d" event={"ID":"68536fe0-b0a2-45b8-98d0-a387d8756183","Type":"ContainerStarted","Data":"b1cd5fff4dc51dc507ac5b3131f1e214b1770c950f785c2551aebf8a379d47ca"} Feb 18 12:04:14 crc kubenswrapper[4717]: I0218 12:04:14.432193 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv"] Feb 18 12:04:14 crc kubenswrapper[4717]: I0218 12:04:14.480018 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24"] Feb 18 12:04:14 crc kubenswrapper[4717]: I0218 12:04:14.577386 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngfkz"] Feb 18 12:04:14 crc kubenswrapper[4717]: W0218 12:04:14.985301 4717 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode75253d6_ecc5_47ff_be33_9274290c65fb.slice/crio-11807b3b102ce3c330fb83ca813a1f0dce61b330ea19a43f8b43e713ce2057fa WatchSource:0}: Error finding container 11807b3b102ce3c330fb83ca813a1f0dce61b330ea19a43f8b43e713ce2057fa: Status 404 returned error can't find the container with id 11807b3b102ce3c330fb83ca813a1f0dce61b330ea19a43f8b43e713ce2057fa Feb 18 12:04:14 crc kubenswrapper[4717]: W0218 12:04:14.985663 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa9ea26a_44d8_4c4d_8766_d1c19fa59d70.slice/crio-8ad160e71de0dbe4831f06aac825004f019f863090eb54a76ea5eea9b750748f WatchSource:0}: Error finding container 8ad160e71de0dbe4831f06aac825004f019f863090eb54a76ea5eea9b750748f: Status 404 returned error can't find the container with id 8ad160e71de0dbe4831f06aac825004f019f863090eb54a76ea5eea9b750748f Feb 18 12:04:14 crc kubenswrapper[4717]: W0218 12:04:14.988772 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dc32a01_ea52_421a_8cca_d3a2d5d6e7fe.slice/crio-a9d0f7938bfb259e42c4d8e4c05eb30bfb40a1a964d48b6f461d6ee5017e18c1 WatchSource:0}: Error finding container a9d0f7938bfb259e42c4d8e4c05eb30bfb40a1a964d48b6f461d6ee5017e18c1: Status 404 returned error can't find the container with id a9d0f7938bfb259e42c4d8e4c05eb30bfb40a1a964d48b6f461d6ee5017e18c1 Feb 18 12:04:15 crc kubenswrapper[4717]: I0218 12:04:15.338739 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-szzvb"] Feb 18 12:04:15 crc kubenswrapper[4717]: I0218 12:04:15.954823 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-n7n5r" 
event={"ID":"88c2fec0-988b-4496-b054-43f965e23324","Type":"ContainerStarted","Data":"fb09c4a9f712974a78c9956c46a905b8831a297c23420ce074ce7231c669c21d"} Feb 18 12:04:15 crc kubenswrapper[4717]: I0218 12:04:15.955327 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-n7n5r" Feb 18 12:04:15 crc kubenswrapper[4717]: I0218 12:04:15.956232 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" event={"ID":"6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe","Type":"ContainerStarted","Data":"a9d0f7938bfb259e42c4d8e4c05eb30bfb40a1a964d48b6f461d6ee5017e18c1"} Feb 18 12:04:15 crc kubenswrapper[4717]: I0218 12:04:15.957501 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rvxd2" event={"ID":"e82b0608-77fd-4e73-bafb-00a7b43b6299","Type":"ContainerStarted","Data":"f70d42cc151a2e6b603d9d48aefa3226809e67e3e7a456008d4921850b84c682"} Feb 18 12:04:15 crc kubenswrapper[4717]: I0218 12:04:15.957889 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rvxd2" Feb 18 12:04:15 crc kubenswrapper[4717]: I0218 12:04:15.986105 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" event={"ID":"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70","Type":"ContainerStarted","Data":"8ad160e71de0dbe4831f06aac825004f019f863090eb54a76ea5eea9b750748f"} Feb 18 12:04:16 crc kubenswrapper[4717]: I0218 12:04:16.001073 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngfkz" event={"ID":"e75253d6-ecc5-47ff-be33-9274290c65fb","Type":"ContainerStarted","Data":"11807b3b102ce3c330fb83ca813a1f0dce61b330ea19a43f8b43e713ce2057fa"} Feb 18 12:04:16 crc kubenswrapper[4717]: 
I0218 12:04:16.001327 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-n7n5r" podStartSLOduration=12.37150748 podStartE2EDuration="30.001310552s" podCreationTimestamp="2026-02-18 12:03:46 +0000 UTC" firstStartedPulling="2026-02-18 12:03:48.121867967 +0000 UTC m=+862.523969283" lastFinishedPulling="2026-02-18 12:04:05.751671019 +0000 UTC m=+880.153772355" observedRunningTime="2026-02-18 12:04:15.972437791 +0000 UTC m=+890.374539127" watchObservedRunningTime="2026-02-18 12:04:16.001310552 +0000 UTC m=+890.403411878" Feb 18 12:04:16 crc kubenswrapper[4717]: I0218 12:04:16.002079 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rvxd2" podStartSLOduration=13.226267007 podStartE2EDuration="31.002072444s" podCreationTimestamp="2026-02-18 12:03:45 +0000 UTC" firstStartedPulling="2026-02-18 12:03:47.962363809 +0000 UTC m=+862.364465125" lastFinishedPulling="2026-02-18 12:04:05.738169246 +0000 UTC m=+880.140270562" observedRunningTime="2026-02-18 12:04:15.98782849 +0000 UTC m=+890.389929806" watchObservedRunningTime="2026-02-18 12:04:16.002072444 +0000 UTC m=+890.404173750" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.015402 4717 generic.go:334] "Generic (PLEG): container finished" podID="35c40328-be7c-4470-80fe-6bdf6e254d8f" containerID="d3530be2dfaca4cfc625002af6112eba3960960506d3dfd839c2f456510f5d30" exitCode=0 Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.016106 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4jwc" event={"ID":"35c40328-be7c-4470-80fe-6bdf6e254d8f","Type":"ContainerDied","Data":"d3530be2dfaca4cfc625002af6112eba3960960506d3dfd839c2f456510f5d30"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.025630 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j9pkf" event={"ID":"3b988944-4f1b-4fb3-89ff-b1a0e61853dc","Type":"ContainerStarted","Data":"8ce68d7eb2706a946abfd7c5e6c7bdf2688d98ee3b1ce5f863617c9e25b06b45"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.026095 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j9pkf" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.057364 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" event={"ID":"fa9ea26a-44d8-4c4d-8766-d1c19fa59d70","Type":"ContainerStarted","Data":"e411f701494d1cac8335784658e2bc98c347b897fcfdc7c08a9210749fae7673"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.057412 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6qrx" event={"ID":"ba6c36b6-f446-4cc2-b1b0-9ebb6146dfde","Type":"ContainerStarted","Data":"a8489883e1efe284969d73d54b7142125270c1a6ed38db296bf7238e65c5a512"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.057431 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.057443 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6qrx" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.076771 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5lvkl" event={"ID":"eca115e0-882d-4173-a714-1883215088b5","Type":"ContainerStarted","Data":"6829a970c6c3ebdc073803839d3a9a4d24be66531315956c6f74a512dfb325b6"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.089942 4717 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5lvkl" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.112972 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2n2t2" event={"ID":"986ac762-6758-4402-a5c9-849780ff7fab","Type":"ContainerStarted","Data":"bbc89623b490e8608c1791e74edb46440141ab5531dc70215ee0c6f6e95b4fec"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.113116 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2n2t2" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.124310 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jrzq4" event={"ID":"089bc44f-bd8a-45b5-a497-17cfc2d38bee","Type":"ContainerStarted","Data":"98d9f2f2f59d8374295bc277af4d639b85be386092620142e6b5878e4cfd2001"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.134811 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j9pkf" podStartSLOduration=3.704888291 podStartE2EDuration="31.134783153s" podCreationTimestamp="2026-02-18 12:03:46 +0000 UTC" firstStartedPulling="2026-02-18 12:03:48.518953013 +0000 UTC m=+862.921054329" lastFinishedPulling="2026-02-18 12:04:15.948847875 +0000 UTC m=+890.350949191" observedRunningTime="2026-02-18 12:04:17.126129321 +0000 UTC m=+891.528230637" watchObservedRunningTime="2026-02-18 12:04:17.134783153 +0000 UTC m=+891.536884469" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.141474 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cmrvc" 
event={"ID":"e95271e1-5edd-4862-9dd9-e7ad1feb0ed0","Type":"ContainerStarted","Data":"ce34438142f835699e415c6eaa931f4bf69830cb032ea62599d9c38b153eac4b"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.141546 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cmrvc" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.147954 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-nljkj" event={"ID":"927be7f4-3bc1-42c8-917f-8b898bbbc21a","Type":"ContainerStarted","Data":"0687d921c0f965a64433791c0871099469a954ebdcb455b70ce913f0e472c976"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.148827 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-nljkj" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.156516 4717 generic.go:334] "Generic (PLEG): container finished" podID="e75253d6-ecc5-47ff-be33-9274290c65fb" containerID="b0d8ceaae7b697c1840c2d4145308415734ff5e32384884f94df69d8c39f2cab" exitCode=0 Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.156625 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngfkz" event={"ID":"e75253d6-ecc5-47ff-be33-9274290c65fb","Type":"ContainerDied","Data":"b0d8ceaae7b697c1840c2d4145308415734ff5e32384884f94df69d8c39f2cab"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.171542 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jhnvm" event={"ID":"3503ed6a-e486-404f-8ac3-df63d9d28c2d","Type":"ContainerStarted","Data":"89199c306a9c32eba765ac10630572d20fa1b9f825299975709161578c8d7cec"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.172358 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jhnvm" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.196156 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6qrx" podStartSLOduration=14.187681423 podStartE2EDuration="32.196124959s" podCreationTimestamp="2026-02-18 12:03:45 +0000 UTC" firstStartedPulling="2026-02-18 12:03:47.731561415 +0000 UTC m=+862.133662731" lastFinishedPulling="2026-02-18 12:04:05.740004951 +0000 UTC m=+880.142106267" observedRunningTime="2026-02-18 12:04:17.192879045 +0000 UTC m=+891.594980361" watchObservedRunningTime="2026-02-18 12:04:17.196124959 +0000 UTC m=+891.598226275" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.205613 4717 generic.go:334] "Generic (PLEG): container finished" podID="cc5d174e-b9b0-46a9-972f-bcdc4ec99462" containerID="342f4aff4d2c7df8bf03036e77f93a7067e95e405d476ce789c4869636548e6f" exitCode=0 Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.205750 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9p7h" event={"ID":"cc5d174e-b9b0-46a9-972f-bcdc4ec99462","Type":"ContainerDied","Data":"342f4aff4d2c7df8bf03036e77f93a7067e95e405d476ce789c4869636548e6f"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.220340 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ldqdh" event={"ID":"b6f768c0-a04d-49f7-a0ae-5ea5ee09c26b","Type":"ContainerStarted","Data":"f32b54843257cdcfb25dc21200dc678a08f51870461f665cd5cd8fe78dbc0f0b"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.220961 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ldqdh" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.235341 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sl49j" event={"ID":"a14214f1-4961-4ade-ba45-d48139b6fd0d","Type":"ContainerStarted","Data":"1d1ade0d707aa1a6af54d4bfc086f8ac5ee24dfc5389f04fa46aa7c8adf12291"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.235444 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sl49j" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.265042 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-52fmp" event={"ID":"5d07e1a5-0372-4721-ac7a-66c568e32be1","Type":"ContainerStarted","Data":"9a4bc6ae086a68205fdb43c9c6f36945834ea44ce8386d2a6e8371d0e669e152"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.265869 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-52fmp" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.290249 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-9tf7v" event={"ID":"e2e22987-3a27-4550-8593-c54e5628e941","Type":"ContainerStarted","Data":"495a675702f5f6b938a9cbd610b20e5b0ca575c4d78d23a1fb30eed98f845fb7"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.291458 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-9tf7v" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.297056 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" podStartSLOduration=31.297031197 podStartE2EDuration="31.297031197s" podCreationTimestamp="2026-02-18 12:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 12:04:17.296898863 +0000 UTC m=+891.699000179" watchObservedRunningTime="2026-02-18 12:04:17.297031197 +0000 UTC m=+891.699132513" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.312997 4717 generic.go:334] "Generic (PLEG): container finished" podID="68536fe0-b0a2-45b8-98d0-a387d8756183" containerID="6b28238baea08d6215064b062f1109423f655c44f51cd955ced748a09b126837" exitCode=0 Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.313174 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84h4d" event={"ID":"68536fe0-b0a2-45b8-98d0-a387d8756183","Type":"ContainerDied","Data":"6b28238baea08d6215064b062f1109423f655c44f51cd955ced748a09b126837"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.325655 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" event={"ID":"96c16cf0-31b6-4830-b92f-f25b4ce11979","Type":"ContainerStarted","Data":"2b4400a25039804d8930b3fcaeb8187f78dde5c595cc52b28794a3246204d6c0"} Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.488531 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ldqdh" podStartSLOduration=13.720181198 podStartE2EDuration="31.488503192s" podCreationTimestamp="2026-02-18 12:03:46 +0000 UTC" firstStartedPulling="2026-02-18 12:03:47.982876811 +0000 UTC m=+862.384978127" lastFinishedPulling="2026-02-18 12:04:05.751198815 +0000 UTC m=+880.153300121" observedRunningTime="2026-02-18 12:04:17.482633411 +0000 UTC m=+891.884734717" watchObservedRunningTime="2026-02-18 12:04:17.488503192 +0000 UTC m=+891.890604508" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.539990 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5lvkl" podStartSLOduration=14.225790254 
podStartE2EDuration="32.53995954s" podCreationTimestamp="2026-02-18 12:03:45 +0000 UTC" firstStartedPulling="2026-02-18 12:03:47.436365309 +0000 UTC m=+861.838466625" lastFinishedPulling="2026-02-18 12:04:05.750534585 +0000 UTC m=+880.152635911" observedRunningTime="2026-02-18 12:04:17.535016106 +0000 UTC m=+891.937117422" watchObservedRunningTime="2026-02-18 12:04:17.53995954 +0000 UTC m=+891.942060846" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.573812 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sl49j" podStartSLOduration=13.831863658 podStartE2EDuration="31.573789625s" podCreationTimestamp="2026-02-18 12:03:46 +0000 UTC" firstStartedPulling="2026-02-18 12:03:48.008372811 +0000 UTC m=+862.410474127" lastFinishedPulling="2026-02-18 12:04:05.750298778 +0000 UTC m=+880.152400094" observedRunningTime="2026-02-18 12:04:17.572068995 +0000 UTC m=+891.974170311" watchObservedRunningTime="2026-02-18 12:04:17.573789625 +0000 UTC m=+891.975890941" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.645159 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-52fmp" podStartSLOduration=13.904388131 podStartE2EDuration="31.645144603s" podCreationTimestamp="2026-02-18 12:03:46 +0000 UTC" firstStartedPulling="2026-02-18 12:03:48.010938998 +0000 UTC m=+862.413040314" lastFinishedPulling="2026-02-18 12:04:05.75169547 +0000 UTC m=+880.153796786" observedRunningTime="2026-02-18 12:04:17.638020345 +0000 UTC m=+892.040121661" watchObservedRunningTime="2026-02-18 12:04:17.645144603 +0000 UTC m=+892.047245919" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.806021 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jhnvm" podStartSLOduration=14.499999714 
podStartE2EDuration="32.805994516s" podCreationTimestamp="2026-02-18 12:03:45 +0000 UTC" firstStartedPulling="2026-02-18 12:03:47.432454652 +0000 UTC m=+861.834555968" lastFinishedPulling="2026-02-18 12:04:05.738449454 +0000 UTC m=+880.140550770" observedRunningTime="2026-02-18 12:04:17.727044447 +0000 UTC m=+892.129145763" watchObservedRunningTime="2026-02-18 12:04:17.805994516 +0000 UTC m=+892.208095832" Feb 18 12:04:17 crc kubenswrapper[4717]: I0218 12:04:17.979043 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-nljkj" podStartSLOduration=14.503783354 podStartE2EDuration="31.979021064s" podCreationTimestamp="2026-02-18 12:03:46 +0000 UTC" firstStartedPulling="2026-02-18 12:03:48.262938776 +0000 UTC m=+862.665040082" lastFinishedPulling="2026-02-18 12:04:05.738176476 +0000 UTC m=+880.140277792" observedRunningTime="2026-02-18 12:04:17.978231821 +0000 UTC m=+892.380333157" watchObservedRunningTime="2026-02-18 12:04:17.979021064 +0000 UTC m=+892.381122380" Feb 18 12:04:18 crc kubenswrapper[4717]: I0218 12:04:18.038204 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jrzq4" podStartSLOduration=4.634937579 podStartE2EDuration="32.038185176s" podCreationTimestamp="2026-02-18 12:03:46 +0000 UTC" firstStartedPulling="2026-02-18 12:03:48.545499845 +0000 UTC m=+862.947601161" lastFinishedPulling="2026-02-18 12:04:15.948747442 +0000 UTC m=+890.350848758" observedRunningTime="2026-02-18 12:04:18.03041075 +0000 UTC m=+892.432512086" watchObservedRunningTime="2026-02-18 12:04:18.038185176 +0000 UTC m=+892.440286492" Feb 18 12:04:18 crc kubenswrapper[4717]: I0218 12:04:18.128310 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-9tf7v" podStartSLOduration=14.90383969 
podStartE2EDuration="32.128239798s" podCreationTimestamp="2026-02-18 12:03:46 +0000 UTC" firstStartedPulling="2026-02-18 12:03:48.515532741 +0000 UTC m=+862.917634057" lastFinishedPulling="2026-02-18 12:04:05.739932849 +0000 UTC m=+880.142034165" observedRunningTime="2026-02-18 12:04:18.109078131 +0000 UTC m=+892.511179447" watchObservedRunningTime="2026-02-18 12:04:18.128239798 +0000 UTC m=+892.530341114" Feb 18 12:04:18 crc kubenswrapper[4717]: I0218 12:04:18.146110 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cmrvc" podStartSLOduration=14.867687179 podStartE2EDuration="32.146078488s" podCreationTimestamp="2026-02-18 12:03:46 +0000 UTC" firstStartedPulling="2026-02-18 12:03:48.4615437 +0000 UTC m=+862.863645016" lastFinishedPulling="2026-02-18 12:04:05.739935009 +0000 UTC m=+880.142036325" observedRunningTime="2026-02-18 12:04:18.067557862 +0000 UTC m=+892.469659178" watchObservedRunningTime="2026-02-18 12:04:18.146078488 +0000 UTC m=+892.548179804" Feb 18 12:04:18 crc kubenswrapper[4717]: I0218 12:04:18.271634 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2n2t2" podStartSLOduration=14.784079204 podStartE2EDuration="33.271609303s" podCreationTimestamp="2026-02-18 12:03:45 +0000 UTC" firstStartedPulling="2026-02-18 12:03:47.250586536 +0000 UTC m=+861.652687852" lastFinishedPulling="2026-02-18 12:04:05.738116635 +0000 UTC m=+880.140217951" observedRunningTime="2026-02-18 12:04:18.197229597 +0000 UTC m=+892.599330923" watchObservedRunningTime="2026-02-18 12:04:18.271609303 +0000 UTC m=+892.673710619" Feb 18 12:04:18 crc kubenswrapper[4717]: I0218 12:04:18.400668 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9cnsb" 
event={"ID":"f9800e95-aed6-4d9b-9e88-b6a5f303ee16","Type":"ContainerStarted","Data":"84556ca43ba7c9fe48098fdd495a1bcff6c25e19a4081cb36942171f2fbf8474"} Feb 18 12:04:18 crc kubenswrapper[4717]: I0218 12:04:18.401296 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9cnsb" Feb 18 12:04:18 crc kubenswrapper[4717]: I0218 12:04:18.419996 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-26mhv" event={"ID":"3faac3ae-2788-4a36-8241-09a601267885","Type":"ContainerStarted","Data":"9cfb2115339ac232cb63aad441c1160f9b969ffa9ffa4392b99c2fde86e3602f"} Feb 18 12:04:18 crc kubenswrapper[4717]: I0218 12:04:18.420747 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-26mhv" Feb 18 12:04:18 crc kubenswrapper[4717]: I0218 12:04:18.442305 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-lx8wj" event={"ID":"196844a3-3220-4557-93a1-dc0887bbb53f","Type":"ContainerStarted","Data":"9f28dfd827b859b144b2a383a09fe32095d99729396d7e8d761e3f011dee9042"} Feb 18 12:04:18 crc kubenswrapper[4717]: I0218 12:04:18.442790 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-lx8wj" Feb 18 12:04:18 crc kubenswrapper[4717]: I0218 12:04:18.456630 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9cnsb" podStartSLOduration=4.9027037490000005 podStartE2EDuration="33.456602169s" podCreationTimestamp="2026-02-18 12:03:45 +0000 UTC" firstStartedPulling="2026-02-18 12:03:47.747126339 +0000 UTC m=+862.149227655" lastFinishedPulling="2026-02-18 12:04:16.301024759 +0000 UTC m=+890.703126075" observedRunningTime="2026-02-18 
12:04:18.428027517 +0000 UTC m=+892.830128833" watchObservedRunningTime="2026-02-18 12:04:18.456602169 +0000 UTC m=+892.858703485" Feb 18 12:04:18 crc kubenswrapper[4717]: I0218 12:04:18.526970 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-26mhv" podStartSLOduration=4.993706323 podStartE2EDuration="32.526953057s" podCreationTimestamp="2026-02-18 12:03:46 +0000 UTC" firstStartedPulling="2026-02-18 12:03:48.528872949 +0000 UTC m=+862.930974265" lastFinishedPulling="2026-02-18 12:04:16.062119683 +0000 UTC m=+890.464220999" observedRunningTime="2026-02-18 12:04:18.491081553 +0000 UTC m=+892.893182869" watchObservedRunningTime="2026-02-18 12:04:18.526953057 +0000 UTC m=+892.929054373" Feb 18 12:04:18 crc kubenswrapper[4717]: I0218 12:04:18.530419 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-lx8wj" podStartSLOduration=4.738754363 podStartE2EDuration="32.530408138s" podCreationTimestamp="2026-02-18 12:03:46 +0000 UTC" firstStartedPulling="2026-02-18 12:03:48.518924502 +0000 UTC m=+862.921025808" lastFinishedPulling="2026-02-18 12:04:16.310578267 +0000 UTC m=+890.712679583" observedRunningTime="2026-02-18 12:04:18.525816954 +0000 UTC m=+892.927918270" watchObservedRunningTime="2026-02-18 12:04:18.530408138 +0000 UTC m=+892.932509454" Feb 18 12:04:19 crc kubenswrapper[4717]: I0218 12:04:19.453215 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h59sk" event={"ID":"886d7474-df3b-4777-bd48-d3bf188f7fc9","Type":"ContainerStarted","Data":"aec2edf6e6a76403b82149f49a758a786409c9c2ba442385fbafec3ed0a2fa2e"} Feb 18 12:04:19 crc kubenswrapper[4717]: I0218 12:04:19.454892 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h59sk" Feb 
18 12:04:19 crc kubenswrapper[4717]: I0218 12:04:19.459694 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9p7h" event={"ID":"cc5d174e-b9b0-46a9-972f-bcdc4ec99462","Type":"ContainerStarted","Data":"8d12f8e248190ec4c8ce2cb9f35098340a4dc62340be4963d7439c4eb2620661"} Feb 18 12:04:19 crc kubenswrapper[4717]: I0218 12:04:19.465455 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84h4d" event={"ID":"68536fe0-b0a2-45b8-98d0-a387d8756183","Type":"ContainerStarted","Data":"3847728f267fea2e7fcab672e747a244ede4f1b60ec59decff23bb4cf03d48de"} Feb 18 12:04:19 crc kubenswrapper[4717]: I0218 12:04:19.468953 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2lmml" event={"ID":"8d4a2d32-4724-4580-a542-7552e580ed15","Type":"ContainerStarted","Data":"22cd2047d298d621e091c4c66e63bf0ca715a71a7c2bbc07af9d277e6b841c27"} Feb 18 12:04:19 crc kubenswrapper[4717]: I0218 12:04:19.469504 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2lmml" Feb 18 12:04:19 crc kubenswrapper[4717]: I0218 12:04:19.476311 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4jwc" event={"ID":"35c40328-be7c-4470-80fe-6bdf6e254d8f","Type":"ContainerStarted","Data":"af7bfc86ee60d7c6122d7b702a0eb7a3bd5ab33b3b6f16a489ee3de878340339"} Feb 18 12:04:19 crc kubenswrapper[4717]: I0218 12:04:19.483055 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h59sk" podStartSLOduration=4.293136637 podStartE2EDuration="33.483040084s" podCreationTimestamp="2026-02-18 12:03:46 +0000 UTC" firstStartedPulling="2026-02-18 12:03:48.515900352 +0000 UTC m=+862.918001668" lastFinishedPulling="2026-02-18 12:04:17.705803809 +0000 UTC 
m=+892.107905115" observedRunningTime="2026-02-18 12:04:19.482516829 +0000 UTC m=+893.884618155" watchObservedRunningTime="2026-02-18 12:04:19.483040084 +0000 UTC m=+893.885141400" Feb 18 12:04:19 crc kubenswrapper[4717]: I0218 12:04:19.488724 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngfkz" event={"ID":"e75253d6-ecc5-47ff-be33-9274290c65fb","Type":"ContainerStarted","Data":"81b553e6a637778e6e559f602d983472af21ea8c1ffe23a20990c5821542968e"} Feb 18 12:04:19 crc kubenswrapper[4717]: I0218 12:04:19.572829 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2lmml" podStartSLOduration=3.8810777610000002 podStartE2EDuration="33.572808658s" podCreationTimestamp="2026-02-18 12:03:46 +0000 UTC" firstStartedPulling="2026-02-18 12:03:48.014669599 +0000 UTC m=+862.416770915" lastFinishedPulling="2026-02-18 12:04:17.706400496 +0000 UTC m=+892.108501812" observedRunningTime="2026-02-18 12:04:19.565147445 +0000 UTC m=+893.967248761" watchObservedRunningTime="2026-02-18 12:04:19.572808658 +0000 UTC m=+893.974909964" Feb 18 12:04:20 crc kubenswrapper[4717]: I0218 12:04:20.682489 4717 generic.go:334] "Generic (PLEG): container finished" podID="e75253d6-ecc5-47ff-be33-9274290c65fb" containerID="81b553e6a637778e6e559f602d983472af21ea8c1ffe23a20990c5821542968e" exitCode=0 Feb 18 12:04:20 crc kubenswrapper[4717]: I0218 12:04:20.683126 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngfkz" event={"ID":"e75253d6-ecc5-47ff-be33-9274290c65fb","Type":"ContainerDied","Data":"81b553e6a637778e6e559f602d983472af21ea8c1ffe23a20990c5821542968e"} Feb 18 12:04:20 crc kubenswrapper[4717]: I0218 12:04:20.688234 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84h4d" 
event={"ID":"68536fe0-b0a2-45b8-98d0-a387d8756183","Type":"ContainerDied","Data":"3847728f267fea2e7fcab672e747a244ede4f1b60ec59decff23bb4cf03d48de"} Feb 18 12:04:20 crc kubenswrapper[4717]: I0218 12:04:20.688390 4717 generic.go:334] "Generic (PLEG): container finished" podID="68536fe0-b0a2-45b8-98d0-a387d8756183" containerID="3847728f267fea2e7fcab672e747a244ede4f1b60ec59decff23bb4cf03d48de" exitCode=0 Feb 18 12:04:20 crc kubenswrapper[4717]: I0218 12:04:20.695226 4717 generic.go:334] "Generic (PLEG): container finished" podID="35c40328-be7c-4470-80fe-6bdf6e254d8f" containerID="af7bfc86ee60d7c6122d7b702a0eb7a3bd5ab33b3b6f16a489ee3de878340339" exitCode=0 Feb 18 12:04:20 crc kubenswrapper[4717]: I0218 12:04:20.697031 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4jwc" event={"ID":"35c40328-be7c-4470-80fe-6bdf6e254d8f","Type":"ContainerDied","Data":"af7bfc86ee60d7c6122d7b702a0eb7a3bd5ab33b3b6f16a489ee3de878340339"} Feb 18 12:04:21 crc kubenswrapper[4717]: I0218 12:04:21.710795 4717 generic.go:334] "Generic (PLEG): container finished" podID="cc5d174e-b9b0-46a9-972f-bcdc4ec99462" containerID="8d12f8e248190ec4c8ce2cb9f35098340a4dc62340be4963d7439c4eb2620661" exitCode=0 Feb 18 12:04:21 crc kubenswrapper[4717]: I0218 12:04:21.710963 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9p7h" event={"ID":"cc5d174e-b9b0-46a9-972f-bcdc4ec99462","Type":"ContainerDied","Data":"8d12f8e248190ec4c8ce2cb9f35098340a4dc62340be4963d7439c4eb2620661"} Feb 18 12:04:22 crc kubenswrapper[4717]: I0218 12:04:22.724142 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" event={"ID":"6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe","Type":"ContainerStarted","Data":"fa42482fec8c144fec458d7390c36301589c9e7e08e9aed9601c7a66dbd1d46c"} Feb 18 12:04:22 crc kubenswrapper[4717]: I0218 12:04:22.724614 4717 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" Feb 18 12:04:22 crc kubenswrapper[4717]: I0218 12:04:22.726229 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4jwc" event={"ID":"35c40328-be7c-4470-80fe-6bdf6e254d8f","Type":"ContainerStarted","Data":"d1f0df6c90c4da38b2fb7b388934684d99e9b7f1c3a1094319c9b949c8f82acc"} Feb 18 12:04:22 crc kubenswrapper[4717]: I0218 12:04:22.728665 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xdtl8" event={"ID":"7c5e0309-c138-4668-bad9-eacff0124d24","Type":"ContainerStarted","Data":"3f99d5cad41ed33fee33926e5870c0ae95a938efa95c3c55e8bf5196d5917f5e"} Feb 18 12:04:22 crc kubenswrapper[4717]: I0218 12:04:22.728865 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xdtl8" Feb 18 12:04:22 crc kubenswrapper[4717]: I0218 12:04:22.730566 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngfkz" event={"ID":"e75253d6-ecc5-47ff-be33-9274290c65fb","Type":"ContainerStarted","Data":"9668b67ec677113e998d515266bd58b1c9cd2d0039c749e8bf607ecd786ce09b"} Feb 18 12:04:22 crc kubenswrapper[4717]: I0218 12:04:22.731732 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" event={"ID":"96c16cf0-31b6-4830-b92f-f25b4ce11979","Type":"ContainerStarted","Data":"eedece81400e6d116b9444c020db341e92f5b0a5d2e132339de3132704083f8c"} Feb 18 12:04:22 crc kubenswrapper[4717]: I0218 12:04:22.732039 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" Feb 18 12:04:22 crc kubenswrapper[4717]: I0218 12:04:22.770456 4717 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" podStartSLOduration=29.494309301 podStartE2EDuration="36.770438088s" podCreationTimestamp="2026-02-18 12:03:46 +0000 UTC" firstStartedPulling="2026-02-18 12:04:15.022168584 +0000 UTC m=+889.424269900" lastFinishedPulling="2026-02-18 12:04:22.298297371 +0000 UTC m=+896.700398687" observedRunningTime="2026-02-18 12:04:22.768045049 +0000 UTC m=+897.170146365" watchObservedRunningTime="2026-02-18 12:04:22.770438088 +0000 UTC m=+897.172539394" Feb 18 12:04:22 crc kubenswrapper[4717]: I0218 12:04:22.849631 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:22 crc kubenswrapper[4717]: I0218 12:04:22.849682 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:22 crc kubenswrapper[4717]: I0218 12:04:22.852595 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z4jwc" podStartSLOduration=28.590068849 podStartE2EDuration="33.85257476s" podCreationTimestamp="2026-02-18 12:03:49 +0000 UTC" firstStartedPulling="2026-02-18 12:04:17.022437442 +0000 UTC m=+891.424538758" lastFinishedPulling="2026-02-18 12:04:22.284943353 +0000 UTC m=+896.687044669" observedRunningTime="2026-02-18 12:04:22.813676437 +0000 UTC m=+897.215777763" watchObservedRunningTime="2026-02-18 12:04:22.85257476 +0000 UTC m=+897.254676076" Feb 18 12:04:22 crc kubenswrapper[4717]: I0218 12:04:22.879518 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ngfkz" podStartSLOduration=15.741385274 podStartE2EDuration="20.879490493s" podCreationTimestamp="2026-02-18 12:04:02 +0000 UTC" firstStartedPulling="2026-02-18 12:04:17.160147041 +0000 UTC m=+891.562248357" 
lastFinishedPulling="2026-02-18 12:04:22.29825226 +0000 UTC m=+896.700353576" observedRunningTime="2026-02-18 12:04:22.847727179 +0000 UTC m=+897.249828495" watchObservedRunningTime="2026-02-18 12:04:22.879490493 +0000 UTC m=+897.281591819" Feb 18 12:04:22 crc kubenswrapper[4717]: I0218 12:04:22.880021 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xdtl8" podStartSLOduration=2.746252001 podStartE2EDuration="36.880014319s" podCreationTimestamp="2026-02-18 12:03:46 +0000 UTC" firstStartedPulling="2026-02-18 12:03:48.141685248 +0000 UTC m=+862.543786564" lastFinishedPulling="2026-02-18 12:04:22.275447566 +0000 UTC m=+896.677548882" observedRunningTime="2026-02-18 12:04:22.877851306 +0000 UTC m=+897.279952622" watchObservedRunningTime="2026-02-18 12:04:22.880014319 +0000 UTC m=+897.282115635" Feb 18 12:04:22 crc kubenswrapper[4717]: I0218 12:04:22.894920 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" podStartSLOduration=31.608601574 podStartE2EDuration="37.894903942s" podCreationTimestamp="2026-02-18 12:03:45 +0000 UTC" firstStartedPulling="2026-02-18 12:04:15.979571109 +0000 UTC m=+890.381672425" lastFinishedPulling="2026-02-18 12:04:22.265873487 +0000 UTC m=+896.667974793" observedRunningTime="2026-02-18 12:04:22.893775259 +0000 UTC m=+897.295876575" watchObservedRunningTime="2026-02-18 12:04:22.894903942 +0000 UTC m=+897.297005258" Feb 18 12:04:23 crc kubenswrapper[4717]: I0218 12:04:23.327305 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-846fd54586-rqvhv" Feb 18 12:04:23 crc kubenswrapper[4717]: I0218 12:04:23.779683 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9p7h" 
event={"ID":"cc5d174e-b9b0-46a9-972f-bcdc4ec99462","Type":"ContainerStarted","Data":"553e9cab2057b99bfae8eec44f111798ab8fa27c82ab67418e06d26b56672a7d"} Feb 18 12:04:23 crc kubenswrapper[4717]: I0218 12:04:23.785761 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84h4d" event={"ID":"68536fe0-b0a2-45b8-98d0-a387d8756183","Type":"ContainerStarted","Data":"c8a4f10b06fcb6b9cfc4070791f5e899d9666e64df924fde795a517503768bad"} Feb 18 12:04:23 crc kubenswrapper[4717]: I0218 12:04:23.810909 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l9p7h" podStartSLOduration=26.253959728 podStartE2EDuration="31.810888371s" podCreationTimestamp="2026-02-18 12:03:52 +0000 UTC" firstStartedPulling="2026-02-18 12:04:17.207697606 +0000 UTC m=+891.609798912" lastFinishedPulling="2026-02-18 12:04:22.764626239 +0000 UTC m=+897.166727555" observedRunningTime="2026-02-18 12:04:23.807825132 +0000 UTC m=+898.209926458" watchObservedRunningTime="2026-02-18 12:04:23.810888371 +0000 UTC m=+898.212989687" Feb 18 12:04:23 crc kubenswrapper[4717]: I0218 12:04:23.834759 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-84h4d" podStartSLOduration=24.865806602 podStartE2EDuration="29.834739345s" podCreationTimestamp="2026-02-18 12:03:54 +0000 UTC" firstStartedPulling="2026-02-18 12:04:17.321484199 +0000 UTC m=+891.723585515" lastFinishedPulling="2026-02-18 12:04:22.290416942 +0000 UTC m=+896.692518258" observedRunningTime="2026-02-18 12:04:23.831426669 +0000 UTC m=+898.233527975" watchObservedRunningTime="2026-02-18 12:04:23.834739345 +0000 UTC m=+898.236840661" Feb 18 12:04:23 crc kubenswrapper[4717]: I0218 12:04:23.932691 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-ngfkz" podUID="e75253d6-ecc5-47ff-be33-9274290c65fb" containerName="registry-server" 
probeResult="failure" output=< Feb 18 12:04:23 crc kubenswrapper[4717]: timeout: failed to connect service ":50051" within 1s Feb 18 12:04:23 crc kubenswrapper[4717]: > Feb 18 12:04:24 crc kubenswrapper[4717]: I0218 12:04:24.514655 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:04:24 crc kubenswrapper[4717]: I0218 12:04:24.515163 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:04:25 crc kubenswrapper[4717]: I0218 12:04:25.565065 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-84h4d" podUID="68536fe0-b0a2-45b8-98d0-a387d8756183" containerName="registry-server" probeResult="failure" output=< Feb 18 12:04:25 crc kubenswrapper[4717]: timeout: failed to connect service ":50051" within 1s Feb 18 12:04:25 crc kubenswrapper[4717]: > Feb 18 12:04:26 crc kubenswrapper[4717]: I0218 12:04:26.172929 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5lvkl" Feb 18 12:04:26 crc kubenswrapper[4717]: I0218 12:04:26.195466 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-jhnvm" Feb 18 12:04:26 crc kubenswrapper[4717]: I0218 12:04:26.244848 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-2n2t2" Feb 18 12:04:26 crc kubenswrapper[4717]: I0218 12:04:26.352788 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-v6qrx" Feb 18 12:04:26 crc kubenswrapper[4717]: I0218 12:04:26.386100 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-rvxd2" Feb 18 12:04:26 crc kubenswrapper[4717]: I0218 12:04:26.393472 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9cnsb" Feb 18 12:04:26 crc kubenswrapper[4717]: I0218 12:04:26.497055 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-ldqdh" Feb 18 12:04:26 crc kubenswrapper[4717]: I0218 12:04:26.580789 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-2lmml" Feb 18 12:04:26 crc kubenswrapper[4717]: I0218 12:04:26.628623 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-nljkj" Feb 18 12:04:26 crc kubenswrapper[4717]: I0218 12:04:26.657806 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-sl49j" Feb 18 12:04:26 crc kubenswrapper[4717]: I0218 12:04:26.677080 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-52fmp" Feb 18 12:04:26 crc kubenswrapper[4717]: I0218 12:04:26.885162 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-n7n5r" Feb 18 12:04:26 crc kubenswrapper[4717]: I0218 12:04:26.964397 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-lx8wj" Feb 18 12:04:27 crc kubenswrapper[4717]: I0218 12:04:27.002618 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/placement-operator-controller-manager-8497b45c89-j9pkf" Feb 18 12:04:27 crc kubenswrapper[4717]: I0218 12:04:27.057860 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-26mhv" Feb 18 12:04:27 crc kubenswrapper[4717]: I0218 12:04:27.108939 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-cmrvc" Feb 18 12:04:27 crc kubenswrapper[4717]: I0218 12:04:27.255760 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-9tf7v" Feb 18 12:04:27 crc kubenswrapper[4717]: I0218 12:04:27.349846 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-h59sk" Feb 18 12:04:30 crc kubenswrapper[4717]: I0218 12:04:30.170331 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:04:30 crc kubenswrapper[4717]: I0218 12:04:30.170967 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:04:30 crc kubenswrapper[4717]: I0218 12:04:30.214576 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:04:30 crc kubenswrapper[4717]: I0218 12:04:30.885613 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:04:30 crc kubenswrapper[4717]: I0218 12:04:30.939942 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z4jwc"] Feb 18 12:04:32 crc kubenswrapper[4717]: I0218 12:04:32.083379 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-szzvb" Feb 18 12:04:32 crc kubenswrapper[4717]: I0218 12:04:32.475207 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24" Feb 18 12:04:32 crc kubenswrapper[4717]: I0218 12:04:32.740509 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:04:32 crc kubenswrapper[4717]: I0218 12:04:32.740562 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:04:32 crc kubenswrapper[4717]: I0218 12:04:32.789498 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:04:32 crc kubenswrapper[4717]: I0218 12:04:32.852625 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z4jwc" podUID="35c40328-be7c-4470-80fe-6bdf6e254d8f" containerName="registry-server" containerID="cri-o://d1f0df6c90c4da38b2fb7b388934684d99e9b7f1c3a1094319c9b949c8f82acc" gracePeriod=2 Feb 18 12:04:32 crc kubenswrapper[4717]: I0218 12:04:32.902184 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:04:32 crc kubenswrapper[4717]: I0218 12:04:32.908436 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:32 crc kubenswrapper[4717]: I0218 12:04:32.958626 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:33 crc kubenswrapper[4717]: I0218 12:04:33.860935 4717 generic.go:334] "Generic (PLEG): container finished" podID="35c40328-be7c-4470-80fe-6bdf6e254d8f" 
containerID="d1f0df6c90c4da38b2fb7b388934684d99e9b7f1c3a1094319c9b949c8f82acc" exitCode=0 Feb 18 12:04:33 crc kubenswrapper[4717]: I0218 12:04:33.861025 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4jwc" event={"ID":"35c40328-be7c-4470-80fe-6bdf6e254d8f","Type":"ContainerDied","Data":"d1f0df6c90c4da38b2fb7b388934684d99e9b7f1c3a1094319c9b949c8f82acc"} Feb 18 12:04:34 crc kubenswrapper[4717]: I0218 12:04:34.565109 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:04:34 crc kubenswrapper[4717]: I0218 12:04:34.613796 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:04:34 crc kubenswrapper[4717]: I0218 12:04:34.645709 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l9p7h"] Feb 18 12:04:34 crc kubenswrapper[4717]: I0218 12:04:34.867598 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l9p7h" podUID="cc5d174e-b9b0-46a9-972f-bcdc4ec99462" containerName="registry-server" containerID="cri-o://553e9cab2057b99bfae8eec44f111798ab8fa27c82ab67418e06d26b56672a7d" gracePeriod=2 Feb 18 12:04:35 crc kubenswrapper[4717]: I0218 12:04:35.246368 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngfkz"] Feb 18 12:04:35 crc kubenswrapper[4717]: I0218 12:04:35.246649 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ngfkz" podUID="e75253d6-ecc5-47ff-be33-9274290c65fb" containerName="registry-server" containerID="cri-o://9668b67ec677113e998d515266bd58b1c9cd2d0039c749e8bf607ecd786ce09b" gracePeriod=2 Feb 18 12:04:35 crc kubenswrapper[4717]: I0218 12:04:35.885292 4717 generic.go:334] "Generic (PLEG): container finished" 
podID="e75253d6-ecc5-47ff-be33-9274290c65fb" containerID="9668b67ec677113e998d515266bd58b1c9cd2d0039c749e8bf607ecd786ce09b" exitCode=0 Feb 18 12:04:35 crc kubenswrapper[4717]: I0218 12:04:35.885343 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngfkz" event={"ID":"e75253d6-ecc5-47ff-be33-9274290c65fb","Type":"ContainerDied","Data":"9668b67ec677113e998d515266bd58b1c9cd2d0039c749e8bf607ecd786ce09b"} Feb 18 12:04:35 crc kubenswrapper[4717]: I0218 12:04:35.888676 4717 generic.go:334] "Generic (PLEG): container finished" podID="cc5d174e-b9b0-46a9-972f-bcdc4ec99462" containerID="553e9cab2057b99bfae8eec44f111798ab8fa27c82ab67418e06d26b56672a7d" exitCode=0 Feb 18 12:04:35 crc kubenswrapper[4717]: I0218 12:04:35.888724 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9p7h" event={"ID":"cc5d174e-b9b0-46a9-972f-bcdc4ec99462","Type":"ContainerDied","Data":"553e9cab2057b99bfae8eec44f111798ab8fa27c82ab67418e06d26b56672a7d"} Feb 18 12:04:35 crc kubenswrapper[4717]: I0218 12:04:35.981107 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:04:35 crc kubenswrapper[4717]: I0218 12:04:35.998373 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c40328-be7c-4470-80fe-6bdf6e254d8f-utilities\") pod \"35c40328-be7c-4470-80fe-6bdf6e254d8f\" (UID: \"35c40328-be7c-4470-80fe-6bdf6e254d8f\") " Feb 18 12:04:35 crc kubenswrapper[4717]: I0218 12:04:35.998437 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2vvr\" (UniqueName: \"kubernetes.io/projected/35c40328-be7c-4470-80fe-6bdf6e254d8f-kube-api-access-r2vvr\") pod \"35c40328-be7c-4470-80fe-6bdf6e254d8f\" (UID: \"35c40328-be7c-4470-80fe-6bdf6e254d8f\") " Feb 18 12:04:35 crc kubenswrapper[4717]: I0218 12:04:35.998644 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c40328-be7c-4470-80fe-6bdf6e254d8f-catalog-content\") pod \"35c40328-be7c-4470-80fe-6bdf6e254d8f\" (UID: \"35c40328-be7c-4470-80fe-6bdf6e254d8f\") " Feb 18 12:04:35 crc kubenswrapper[4717]: I0218 12:04:35.999575 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35c40328-be7c-4470-80fe-6bdf6e254d8f-utilities" (OuterVolumeSpecName: "utilities") pod "35c40328-be7c-4470-80fe-6bdf6e254d8f" (UID: "35c40328-be7c-4470-80fe-6bdf6e254d8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.009872 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c40328-be7c-4470-80fe-6bdf6e254d8f-kube-api-access-r2vvr" (OuterVolumeSpecName: "kube-api-access-r2vvr") pod "35c40328-be7c-4470-80fe-6bdf6e254d8f" (UID: "35c40328-be7c-4470-80fe-6bdf6e254d8f"). InnerVolumeSpecName "kube-api-access-r2vvr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.050481 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35c40328-be7c-4470-80fe-6bdf6e254d8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35c40328-be7c-4470-80fe-6bdf6e254d8f" (UID: "35c40328-be7c-4470-80fe-6bdf6e254d8f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.056827 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.100617 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqp6k\" (UniqueName: \"kubernetes.io/projected/e75253d6-ecc5-47ff-be33-9274290c65fb-kube-api-access-jqp6k\") pod \"e75253d6-ecc5-47ff-be33-9274290c65fb\" (UID: \"e75253d6-ecc5-47ff-be33-9274290c65fb\") " Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.100789 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75253d6-ecc5-47ff-be33-9274290c65fb-catalog-content\") pod \"e75253d6-ecc5-47ff-be33-9274290c65fb\" (UID: \"e75253d6-ecc5-47ff-be33-9274290c65fb\") " Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.102293 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75253d6-ecc5-47ff-be33-9274290c65fb-utilities\") pod \"e75253d6-ecc5-47ff-be33-9274290c65fb\" (UID: \"e75253d6-ecc5-47ff-be33-9274290c65fb\") " Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.103108 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e75253d6-ecc5-47ff-be33-9274290c65fb-utilities" 
(OuterVolumeSpecName: "utilities") pod "e75253d6-ecc5-47ff-be33-9274290c65fb" (UID: "e75253d6-ecc5-47ff-be33-9274290c65fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.103155 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2vvr\" (UniqueName: \"kubernetes.io/projected/35c40328-be7c-4470-80fe-6bdf6e254d8f-kube-api-access-r2vvr\") on node \"crc\" DevicePath \"\"" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.103175 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c40328-be7c-4470-80fe-6bdf6e254d8f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.103188 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c40328-be7c-4470-80fe-6bdf6e254d8f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.105780 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e75253d6-ecc5-47ff-be33-9274290c65fb-kube-api-access-jqp6k" (OuterVolumeSpecName: "kube-api-access-jqp6k") pod "e75253d6-ecc5-47ff-be33-9274290c65fb" (UID: "e75253d6-ecc5-47ff-be33-9274290c65fb"). InnerVolumeSpecName "kube-api-access-jqp6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.128317 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e75253d6-ecc5-47ff-be33-9274290c65fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e75253d6-ecc5-47ff-be33-9274290c65fb" (UID: "e75253d6-ecc5-47ff-be33-9274290c65fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.207697 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75253d6-ecc5-47ff-be33-9274290c65fb-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.207746 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqp6k\" (UniqueName: \"kubernetes.io/projected/e75253d6-ecc5-47ff-be33-9274290c65fb-kube-api-access-jqp6k\") on node \"crc\" DevicePath \"\"" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.207763 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75253d6-ecc5-47ff-be33-9274290c65fb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.543782 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.613035 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-catalog-content\") pod \"cc5d174e-b9b0-46a9-972f-bcdc4ec99462\" (UID: \"cc5d174e-b9b0-46a9-972f-bcdc4ec99462\") " Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.613161 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-utilities\") pod \"cc5d174e-b9b0-46a9-972f-bcdc4ec99462\" (UID: \"cc5d174e-b9b0-46a9-972f-bcdc4ec99462\") " Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.613214 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwtml\" (UniqueName: 
\"kubernetes.io/projected/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-kube-api-access-bwtml\") pod \"cc5d174e-b9b0-46a9-972f-bcdc4ec99462\" (UID: \"cc5d174e-b9b0-46a9-972f-bcdc4ec99462\") " Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.615446 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-utilities" (OuterVolumeSpecName: "utilities") pod "cc5d174e-b9b0-46a9-972f-bcdc4ec99462" (UID: "cc5d174e-b9b0-46a9-972f-bcdc4ec99462"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.618286 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-kube-api-access-bwtml" (OuterVolumeSpecName: "kube-api-access-bwtml") pod "cc5d174e-b9b0-46a9-972f-bcdc4ec99462" (UID: "cc5d174e-b9b0-46a9-972f-bcdc4ec99462"). InnerVolumeSpecName "kube-api-access-bwtml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.715638 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.715667 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwtml\" (UniqueName: \"kubernetes.io/projected/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-kube-api-access-bwtml\") on node \"crc\" DevicePath \"\"" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.736829 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc5d174e-b9b0-46a9-972f-bcdc4ec99462" (UID: "cc5d174e-b9b0-46a9-972f-bcdc4ec99462"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.773695 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xdtl8" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.817556 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5d174e-b9b0-46a9-972f-bcdc4ec99462-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.896795 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z4jwc" event={"ID":"35c40328-be7c-4470-80fe-6bdf6e254d8f","Type":"ContainerDied","Data":"a76e77cf754315d923b6d6fcf43c1b90eaf679f87893e898758b935ea87d9071"} Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.896859 4717 scope.go:117] "RemoveContainer" containerID="d1f0df6c90c4da38b2fb7b388934684d99e9b7f1c3a1094319c9b949c8f82acc" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.896874 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z4jwc" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.900419 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngfkz" event={"ID":"e75253d6-ecc5-47ff-be33-9274290c65fb","Type":"ContainerDied","Data":"11807b3b102ce3c330fb83ca813a1f0dce61b330ea19a43f8b43e713ce2057fa"} Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.900511 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngfkz" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.903122 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l9p7h" event={"ID":"cc5d174e-b9b0-46a9-972f-bcdc4ec99462","Type":"ContainerDied","Data":"ee6ceaa35e6619f49c6bf159aca8d435ce80557fbab6011f80353fbf32ccf7cd"} Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.903227 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l9p7h" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.924047 4717 scope.go:117] "RemoveContainer" containerID="af7bfc86ee60d7c6122d7b702a0eb7a3bd5ab33b3b6f16a489ee3de878340339" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.939742 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z4jwc"] Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.945216 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z4jwc"] Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.954945 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l9p7h"] Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.960985 4717 scope.go:117] "RemoveContainer" containerID="d3530be2dfaca4cfc625002af6112eba3960960506d3dfd839c2f456510f5d30" Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.963046 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l9p7h"] Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.972855 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngfkz"] Feb 18 12:04:36 crc kubenswrapper[4717]: I0218 12:04:36.991226 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngfkz"] Feb 18 12:04:37 crc 
kubenswrapper[4717]: I0218 12:04:37.002089 4717 scope.go:117] "RemoveContainer" containerID="9668b67ec677113e998d515266bd58b1c9cd2d0039c749e8bf607ecd786ce09b" Feb 18 12:04:37 crc kubenswrapper[4717]: I0218 12:04:37.027760 4717 scope.go:117] "RemoveContainer" containerID="81b553e6a637778e6e559f602d983472af21ea8c1ffe23a20990c5821542968e" Feb 18 12:04:37 crc kubenswrapper[4717]: I0218 12:04:37.056213 4717 scope.go:117] "RemoveContainer" containerID="b0d8ceaae7b697c1840c2d4145308415734ff5e32384884f94df69d8c39f2cab" Feb 18 12:04:37 crc kubenswrapper[4717]: I0218 12:04:37.057005 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35c40328-be7c-4470-80fe-6bdf6e254d8f" path="/var/lib/kubelet/pods/35c40328-be7c-4470-80fe-6bdf6e254d8f/volumes" Feb 18 12:04:37 crc kubenswrapper[4717]: I0218 12:04:37.058005 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc5d174e-b9b0-46a9-972f-bcdc4ec99462" path="/var/lib/kubelet/pods/cc5d174e-b9b0-46a9-972f-bcdc4ec99462/volumes" Feb 18 12:04:37 crc kubenswrapper[4717]: I0218 12:04:37.058997 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e75253d6-ecc5-47ff-be33-9274290c65fb" path="/var/lib/kubelet/pods/e75253d6-ecc5-47ff-be33-9274290c65fb/volumes" Feb 18 12:04:37 crc kubenswrapper[4717]: I0218 12:04:37.060661 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84h4d"] Feb 18 12:04:37 crc kubenswrapper[4717]: I0218 12:04:37.060915 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-84h4d" podUID="68536fe0-b0a2-45b8-98d0-a387d8756183" containerName="registry-server" containerID="cri-o://c8a4f10b06fcb6b9cfc4070791f5e899d9666e64df924fde795a517503768bad" gracePeriod=2 Feb 18 12:04:37 crc kubenswrapper[4717]: I0218 12:04:37.072442 4717 scope.go:117] "RemoveContainer" containerID="553e9cab2057b99bfae8eec44f111798ab8fa27c82ab67418e06d26b56672a7d" Feb 18 
12:04:37 crc kubenswrapper[4717]: I0218 12:04:37.087152 4717 scope.go:117] "RemoveContainer" containerID="8d12f8e248190ec4c8ce2cb9f35098340a4dc62340be4963d7439c4eb2620661" Feb 18 12:04:37 crc kubenswrapper[4717]: I0218 12:04:37.110897 4717 scope.go:117] "RemoveContainer" containerID="342f4aff4d2c7df8bf03036e77f93a7067e95e405d476ce789c4869636548e6f" Feb 18 12:04:37 crc kubenswrapper[4717]: I0218 12:04:37.911600 4717 generic.go:334] "Generic (PLEG): container finished" podID="68536fe0-b0a2-45b8-98d0-a387d8756183" containerID="c8a4f10b06fcb6b9cfc4070791f5e899d9666e64df924fde795a517503768bad" exitCode=0 Feb 18 12:04:37 crc kubenswrapper[4717]: I0218 12:04:37.911669 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84h4d" event={"ID":"68536fe0-b0a2-45b8-98d0-a387d8756183","Type":"ContainerDied","Data":"c8a4f10b06fcb6b9cfc4070791f5e899d9666e64df924fde795a517503768bad"} Feb 18 12:04:38 crc kubenswrapper[4717]: I0218 12:04:38.304417 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:04:38 crc kubenswrapper[4717]: I0218 12:04:38.340687 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68536fe0-b0a2-45b8-98d0-a387d8756183-utilities\") pod \"68536fe0-b0a2-45b8-98d0-a387d8756183\" (UID: \"68536fe0-b0a2-45b8-98d0-a387d8756183\") " Feb 18 12:04:38 crc kubenswrapper[4717]: I0218 12:04:38.340837 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68536fe0-b0a2-45b8-98d0-a387d8756183-catalog-content\") pod \"68536fe0-b0a2-45b8-98d0-a387d8756183\" (UID: \"68536fe0-b0a2-45b8-98d0-a387d8756183\") " Feb 18 12:04:38 crc kubenswrapper[4717]: I0218 12:04:38.340891 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-857d5\" (UniqueName: \"kubernetes.io/projected/68536fe0-b0a2-45b8-98d0-a387d8756183-kube-api-access-857d5\") pod \"68536fe0-b0a2-45b8-98d0-a387d8756183\" (UID: \"68536fe0-b0a2-45b8-98d0-a387d8756183\") " Feb 18 12:04:38 crc kubenswrapper[4717]: I0218 12:04:38.341654 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68536fe0-b0a2-45b8-98d0-a387d8756183-utilities" (OuterVolumeSpecName: "utilities") pod "68536fe0-b0a2-45b8-98d0-a387d8756183" (UID: "68536fe0-b0a2-45b8-98d0-a387d8756183"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:04:38 crc kubenswrapper[4717]: I0218 12:04:38.347469 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68536fe0-b0a2-45b8-98d0-a387d8756183-kube-api-access-857d5" (OuterVolumeSpecName: "kube-api-access-857d5") pod "68536fe0-b0a2-45b8-98d0-a387d8756183" (UID: "68536fe0-b0a2-45b8-98d0-a387d8756183"). InnerVolumeSpecName "kube-api-access-857d5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:04:38 crc kubenswrapper[4717]: I0218 12:04:38.400230 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68536fe0-b0a2-45b8-98d0-a387d8756183-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68536fe0-b0a2-45b8-98d0-a387d8756183" (UID: "68536fe0-b0a2-45b8-98d0-a387d8756183"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:04:38 crc kubenswrapper[4717]: I0218 12:04:38.442863 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68536fe0-b0a2-45b8-98d0-a387d8756183-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:04:38 crc kubenswrapper[4717]: I0218 12:04:38.443436 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68536fe0-b0a2-45b8-98d0-a387d8756183-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:04:38 crc kubenswrapper[4717]: I0218 12:04:38.443452 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-857d5\" (UniqueName: \"kubernetes.io/projected/68536fe0-b0a2-45b8-98d0-a387d8756183-kube-api-access-857d5\") on node \"crc\" DevicePath \"\"" Feb 18 12:04:38 crc kubenswrapper[4717]: I0218 12:04:38.928778 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84h4d" event={"ID":"68536fe0-b0a2-45b8-98d0-a387d8756183","Type":"ContainerDied","Data":"b1cd5fff4dc51dc507ac5b3131f1e214b1770c950f785c2551aebf8a379d47ca"} Feb 18 12:04:38 crc kubenswrapper[4717]: I0218 12:04:38.928847 4717 scope.go:117] "RemoveContainer" containerID="c8a4f10b06fcb6b9cfc4070791f5e899d9666e64df924fde795a517503768bad" Feb 18 12:04:38 crc kubenswrapper[4717]: I0218 12:04:38.928980 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-84h4d" Feb 18 12:04:38 crc kubenswrapper[4717]: I0218 12:04:38.959572 4717 scope.go:117] "RemoveContainer" containerID="3847728f267fea2e7fcab672e747a244ede4f1b60ec59decff23bb4cf03d48de" Feb 18 12:04:38 crc kubenswrapper[4717]: I0218 12:04:38.989339 4717 scope.go:117] "RemoveContainer" containerID="6b28238baea08d6215064b062f1109423f655c44f51cd955ced748a09b126837" Feb 18 12:04:38 crc kubenswrapper[4717]: I0218 12:04:38.997198 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84h4d"] Feb 18 12:04:39 crc kubenswrapper[4717]: I0218 12:04:39.004482 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-84h4d"] Feb 18 12:04:39 crc kubenswrapper[4717]: I0218 12:04:39.051710 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68536fe0-b0a2-45b8-98d0-a387d8756183" path="/var/lib/kubelet/pods/68536fe0-b0a2-45b8-98d0-a387d8756183/volumes" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.674044 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k9qjk"] Feb 18 12:04:53 crc kubenswrapper[4717]: E0218 12:04:53.678000 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c40328-be7c-4470-80fe-6bdf6e254d8f" containerName="registry-server" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.678114 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c40328-be7c-4470-80fe-6bdf6e254d8f" containerName="registry-server" Feb 18 12:04:53 crc kubenswrapper[4717]: E0218 12:04:53.678207 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68536fe0-b0a2-45b8-98d0-a387d8756183" containerName="extract-utilities" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.678314 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="68536fe0-b0a2-45b8-98d0-a387d8756183" containerName="extract-utilities" Feb 18 
12:04:53 crc kubenswrapper[4717]: E0218 12:04:53.678409 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c40328-be7c-4470-80fe-6bdf6e254d8f" containerName="extract-utilities" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.678489 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c40328-be7c-4470-80fe-6bdf6e254d8f" containerName="extract-utilities" Feb 18 12:04:53 crc kubenswrapper[4717]: E0218 12:04:53.678571 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75253d6-ecc5-47ff-be33-9274290c65fb" containerName="extract-content" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.678648 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75253d6-ecc5-47ff-be33-9274290c65fb" containerName="extract-content" Feb 18 12:04:53 crc kubenswrapper[4717]: E0218 12:04:53.678731 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68536fe0-b0a2-45b8-98d0-a387d8756183" containerName="registry-server" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.678801 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="68536fe0-b0a2-45b8-98d0-a387d8756183" containerName="registry-server" Feb 18 12:04:53 crc kubenswrapper[4717]: E0218 12:04:53.678877 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75253d6-ecc5-47ff-be33-9274290c65fb" containerName="registry-server" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.678950 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75253d6-ecc5-47ff-be33-9274290c65fb" containerName="registry-server" Feb 18 12:04:53 crc kubenswrapper[4717]: E0218 12:04:53.679024 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5d174e-b9b0-46a9-972f-bcdc4ec99462" containerName="extract-utilities" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.679089 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5d174e-b9b0-46a9-972f-bcdc4ec99462" containerName="extract-utilities" Feb 18 
12:04:53 crc kubenswrapper[4717]: E0218 12:04:53.679163 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68536fe0-b0a2-45b8-98d0-a387d8756183" containerName="extract-content" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.679230 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="68536fe0-b0a2-45b8-98d0-a387d8756183" containerName="extract-content" Feb 18 12:04:53 crc kubenswrapper[4717]: E0218 12:04:53.679340 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5d174e-b9b0-46a9-972f-bcdc4ec99462" containerName="registry-server" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.679418 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5d174e-b9b0-46a9-972f-bcdc4ec99462" containerName="registry-server" Feb 18 12:04:53 crc kubenswrapper[4717]: E0218 12:04:53.679487 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c40328-be7c-4470-80fe-6bdf6e254d8f" containerName="extract-content" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.679562 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c40328-be7c-4470-80fe-6bdf6e254d8f" containerName="extract-content" Feb 18 12:04:53 crc kubenswrapper[4717]: E0218 12:04:53.679634 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5d174e-b9b0-46a9-972f-bcdc4ec99462" containerName="extract-content" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.679707 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5d174e-b9b0-46a9-972f-bcdc4ec99462" containerName="extract-content" Feb 18 12:04:53 crc kubenswrapper[4717]: E0218 12:04:53.679789 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75253d6-ecc5-47ff-be33-9274290c65fb" containerName="extract-utilities" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.679863 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75253d6-ecc5-47ff-be33-9274290c65fb" containerName="extract-utilities" Feb 18 12:04:53 
crc kubenswrapper[4717]: I0218 12:04:53.680199 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="68536fe0-b0a2-45b8-98d0-a387d8756183" containerName="registry-server" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.680307 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c40328-be7c-4470-80fe-6bdf6e254d8f" containerName="registry-server" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.680390 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5d174e-b9b0-46a9-972f-bcdc4ec99462" containerName="registry-server" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.680472 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="e75253d6-ecc5-47ff-be33-9274290c65fb" containerName="registry-server" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.681687 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-k9qjk" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.687783 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.688070 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-zb5n9" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.688472 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k9qjk"] Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.688501 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.692918 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.779277 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6d6kw"] Feb 18 12:04:53 crc 
kubenswrapper[4717]: I0218 12:04:53.779612 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24rdz\" (UniqueName: \"kubernetes.io/projected/933fff0e-44a0-4c01-9413-fdbee4638f09-kube-api-access-24rdz\") pod \"dnsmasq-dns-675f4bcbfc-k9qjk\" (UID: \"933fff0e-44a0-4c01-9413-fdbee4638f09\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k9qjk" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.779777 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933fff0e-44a0-4c01-9413-fdbee4638f09-config\") pod \"dnsmasq-dns-675f4bcbfc-k9qjk\" (UID: \"933fff0e-44a0-4c01-9413-fdbee4638f09\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k9qjk" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.781413 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6d6kw" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.784783 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6d6kw"] Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.797996 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.880988 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkphx\" (UniqueName: \"kubernetes.io/projected/f28451bf-06ad-402e-8842-5dad8e6d236b-kube-api-access-bkphx\") pod \"dnsmasq-dns-78dd6ddcc-6d6kw\" (UID: \"f28451bf-06ad-402e-8842-5dad8e6d236b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6d6kw" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.881079 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f28451bf-06ad-402e-8842-5dad8e6d236b-dns-svc\") pod 
\"dnsmasq-dns-78dd6ddcc-6d6kw\" (UID: \"f28451bf-06ad-402e-8842-5dad8e6d236b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6d6kw" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.881168 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24rdz\" (UniqueName: \"kubernetes.io/projected/933fff0e-44a0-4c01-9413-fdbee4638f09-kube-api-access-24rdz\") pod \"dnsmasq-dns-675f4bcbfc-k9qjk\" (UID: \"933fff0e-44a0-4c01-9413-fdbee4638f09\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k9qjk" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.881197 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933fff0e-44a0-4c01-9413-fdbee4638f09-config\") pod \"dnsmasq-dns-675f4bcbfc-k9qjk\" (UID: \"933fff0e-44a0-4c01-9413-fdbee4638f09\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k9qjk" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.881275 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f28451bf-06ad-402e-8842-5dad8e6d236b-config\") pod \"dnsmasq-dns-78dd6ddcc-6d6kw\" (UID: \"f28451bf-06ad-402e-8842-5dad8e6d236b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6d6kw" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.882206 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933fff0e-44a0-4c01-9413-fdbee4638f09-config\") pod \"dnsmasq-dns-675f4bcbfc-k9qjk\" (UID: \"933fff0e-44a0-4c01-9413-fdbee4638f09\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k9qjk" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.916369 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24rdz\" (UniqueName: \"kubernetes.io/projected/933fff0e-44a0-4c01-9413-fdbee4638f09-kube-api-access-24rdz\") pod \"dnsmasq-dns-675f4bcbfc-k9qjk\" (UID: 
\"933fff0e-44a0-4c01-9413-fdbee4638f09\") " pod="openstack/dnsmasq-dns-675f4bcbfc-k9qjk" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.983647 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f28451bf-06ad-402e-8842-5dad8e6d236b-config\") pod \"dnsmasq-dns-78dd6ddcc-6d6kw\" (UID: \"f28451bf-06ad-402e-8842-5dad8e6d236b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6d6kw" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.983770 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkphx\" (UniqueName: \"kubernetes.io/projected/f28451bf-06ad-402e-8842-5dad8e6d236b-kube-api-access-bkphx\") pod \"dnsmasq-dns-78dd6ddcc-6d6kw\" (UID: \"f28451bf-06ad-402e-8842-5dad8e6d236b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6d6kw" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.983822 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f28451bf-06ad-402e-8842-5dad8e6d236b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6d6kw\" (UID: \"f28451bf-06ad-402e-8842-5dad8e6d236b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6d6kw" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.984762 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f28451bf-06ad-402e-8842-5dad8e6d236b-config\") pod \"dnsmasq-dns-78dd6ddcc-6d6kw\" (UID: \"f28451bf-06ad-402e-8842-5dad8e6d236b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6d6kw" Feb 18 12:04:53 crc kubenswrapper[4717]: I0218 12:04:53.984908 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f28451bf-06ad-402e-8842-5dad8e6d236b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6d6kw\" (UID: \"f28451bf-06ad-402e-8842-5dad8e6d236b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6d6kw" Feb 18 12:04:54 crc 
kubenswrapper[4717]: I0218 12:04:54.005224 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkphx\" (UniqueName: \"kubernetes.io/projected/f28451bf-06ad-402e-8842-5dad8e6d236b-kube-api-access-bkphx\") pod \"dnsmasq-dns-78dd6ddcc-6d6kw\" (UID: \"f28451bf-06ad-402e-8842-5dad8e6d236b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6d6kw" Feb 18 12:04:54 crc kubenswrapper[4717]: I0218 12:04:54.012911 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-k9qjk" Feb 18 12:04:54 crc kubenswrapper[4717]: I0218 12:04:54.136572 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6d6kw" Feb 18 12:04:54 crc kubenswrapper[4717]: I0218 12:04:54.462344 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k9qjk"] Feb 18 12:04:54 crc kubenswrapper[4717]: I0218 12:04:54.468086 4717 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:04:54 crc kubenswrapper[4717]: I0218 12:04:54.569532 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6d6kw"] Feb 18 12:04:54 crc kubenswrapper[4717]: W0218 12:04:54.575155 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf28451bf_06ad_402e_8842_5dad8e6d236b.slice/crio-650f2a4d84329b1cdf13bfc082e1d59e37ba0a590d54ef742c7a6a7367bfd585 WatchSource:0}: Error finding container 650f2a4d84329b1cdf13bfc082e1d59e37ba0a590d54ef742c7a6a7367bfd585: Status 404 returned error can't find the container with id 650f2a4d84329b1cdf13bfc082e1d59e37ba0a590d54ef742c7a6a7367bfd585 Feb 18 12:04:55 crc kubenswrapper[4717]: I0218 12:04:55.048951 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-k9qjk" 
event={"ID":"933fff0e-44a0-4c01-9413-fdbee4638f09","Type":"ContainerStarted","Data":"21c03a4b4205ed4d7b32cce5e0cc818622659846fd3da02d0ae7eba395c528a3"} Feb 18 12:04:55 crc kubenswrapper[4717]: I0218 12:04:55.049001 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6d6kw" event={"ID":"f28451bf-06ad-402e-8842-5dad8e6d236b","Type":"ContainerStarted","Data":"650f2a4d84329b1cdf13bfc082e1d59e37ba0a590d54ef742c7a6a7367bfd585"} Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.471470 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k9qjk"] Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.495762 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qdxkj"] Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.498598 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.526299 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34c74aec-6f0a-4388-b1f8-46574087c035-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qdxkj\" (UID: \"34c74aec-6f0a-4388-b1f8-46574087c035\") " pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.526351 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34c74aec-6f0a-4388-b1f8-46574087c035-config\") pod \"dnsmasq-dns-666b6646f7-qdxkj\" (UID: \"34c74aec-6f0a-4388-b1f8-46574087c035\") " pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.526398 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhdln\" (UniqueName: 
\"kubernetes.io/projected/34c74aec-6f0a-4388-b1f8-46574087c035-kube-api-access-lhdln\") pod \"dnsmasq-dns-666b6646f7-qdxkj\" (UID: \"34c74aec-6f0a-4388-b1f8-46574087c035\") " pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.531161 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qdxkj"] Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.632646 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34c74aec-6f0a-4388-b1f8-46574087c035-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qdxkj\" (UID: \"34c74aec-6f0a-4388-b1f8-46574087c035\") " pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.633852 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34c74aec-6f0a-4388-b1f8-46574087c035-config\") pod \"dnsmasq-dns-666b6646f7-qdxkj\" (UID: \"34c74aec-6f0a-4388-b1f8-46574087c035\") " pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.634003 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhdln\" (UniqueName: \"kubernetes.io/projected/34c74aec-6f0a-4388-b1f8-46574087c035-kube-api-access-lhdln\") pod \"dnsmasq-dns-666b6646f7-qdxkj\" (UID: \"34c74aec-6f0a-4388-b1f8-46574087c035\") " pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.636251 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34c74aec-6f0a-4388-b1f8-46574087c035-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qdxkj\" (UID: \"34c74aec-6f0a-4388-b1f8-46574087c035\") " pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.637596 4717 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34c74aec-6f0a-4388-b1f8-46574087c035-config\") pod \"dnsmasq-dns-666b6646f7-qdxkj\" (UID: \"34c74aec-6f0a-4388-b1f8-46574087c035\") " pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.675048 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhdln\" (UniqueName: \"kubernetes.io/projected/34c74aec-6f0a-4388-b1f8-46574087c035-kube-api-access-lhdln\") pod \"dnsmasq-dns-666b6646f7-qdxkj\" (UID: \"34c74aec-6f0a-4388-b1f8-46574087c035\") " pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.830240 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.855320 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6d6kw"] Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.877768 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c29fd"] Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.880781 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.896127 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c29fd"] Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.944597 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/967d5877-0ade-411e-8b7c-16730b8dc3a1-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-c29fd\" (UID: \"967d5877-0ade-411e-8b7c-16730b8dc3a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.957833 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb2mx\" (UniqueName: \"kubernetes.io/projected/967d5877-0ade-411e-8b7c-16730b8dc3a1-kube-api-access-fb2mx\") pod \"dnsmasq-dns-57d769cc4f-c29fd\" (UID: \"967d5877-0ade-411e-8b7c-16730b8dc3a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" Feb 18 12:04:56 crc kubenswrapper[4717]: I0218 12:04:56.958102 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/967d5877-0ade-411e-8b7c-16730b8dc3a1-config\") pod \"dnsmasq-dns-57d769cc4f-c29fd\" (UID: \"967d5877-0ade-411e-8b7c-16730b8dc3a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.070970 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/967d5877-0ade-411e-8b7c-16730b8dc3a1-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-c29fd\" (UID: \"967d5877-0ade-411e-8b7c-16730b8dc3a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.071034 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb2mx\" (UniqueName: 
\"kubernetes.io/projected/967d5877-0ade-411e-8b7c-16730b8dc3a1-kube-api-access-fb2mx\") pod \"dnsmasq-dns-57d769cc4f-c29fd\" (UID: \"967d5877-0ade-411e-8b7c-16730b8dc3a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.071111 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/967d5877-0ade-411e-8b7c-16730b8dc3a1-config\") pod \"dnsmasq-dns-57d769cc4f-c29fd\" (UID: \"967d5877-0ade-411e-8b7c-16730b8dc3a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.073719 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/967d5877-0ade-411e-8b7c-16730b8dc3a1-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-c29fd\" (UID: \"967d5877-0ade-411e-8b7c-16730b8dc3a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.075400 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/967d5877-0ade-411e-8b7c-16730b8dc3a1-config\") pod \"dnsmasq-dns-57d769cc4f-c29fd\" (UID: \"967d5877-0ade-411e-8b7c-16730b8dc3a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.104999 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb2mx\" (UniqueName: \"kubernetes.io/projected/967d5877-0ade-411e-8b7c-16730b8dc3a1-kube-api-access-fb2mx\") pod \"dnsmasq-dns-57d769cc4f-c29fd\" (UID: \"967d5877-0ade-411e-8b7c-16730b8dc3a1\") " pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.289132 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.354739 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qdxkj"] Feb 18 12:04:57 crc kubenswrapper[4717]: W0218 12:04:57.370600 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34c74aec_6f0a_4388_b1f8_46574087c035.slice/crio-62c447fd45b0994cdf0c232fdb4468927a8a24e0eadd4d33a52360dba29d1802 WatchSource:0}: Error finding container 62c447fd45b0994cdf0c232fdb4468927a8a24e0eadd4d33a52360dba29d1802: Status 404 returned error can't find the container with id 62c447fd45b0994cdf0c232fdb4468927a8a24e0eadd4d33a52360dba29d1802 Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.667583 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.670137 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.672970 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.673465 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-fn8c9" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.673670 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.673800 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.673966 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.674172 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.674357 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.688743 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.688789 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/636b0761-84e8-4d2f-88f4-4845e2a05f80-pod-info\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " 
pod="openstack/rabbitmq-server-0" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.688819 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nlx7\" (UniqueName: \"kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-kube-api-access-6nlx7\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.688865 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.688891 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-server-conf\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.688918 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.688944 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0" Feb 18 12:04:57 crc 
kubenswrapper[4717]: I0218 12:04:57.688973 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.689016 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/636b0761-84e8-4d2f-88f4-4845e2a05f80-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.689039 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-config-data\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.689075 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0" Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.689109 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.749473 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c29fd"] Feb 18 12:04:57 crc kubenswrapper[4717]: W0218 12:04:57.792425 4717 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod967d5877_0ade_411e_8b7c_16730b8dc3a1.slice/crio-22130b8499c251b79c3541000c41abc11ffb7d494cda00096068f44bcca27492 WatchSource:0}: Error finding container 22130b8499c251b79c3541000c41abc11ffb7d494cda00096068f44bcca27492: Status 404 returned error can't find the container with id 22130b8499c251b79c3541000c41abc11ffb7d494cda00096068f44bcca27492
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.792474 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.792728 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.792756 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/636b0761-84e8-4d2f-88f4-4845e2a05f80-pod-info\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.792819 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nlx7\" (UniqueName: \"kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-kube-api-access-6nlx7\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.792896 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.793383 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.793878 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.793927 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-server-conf\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.793964 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.793983 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.794015 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.794103 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/636b0761-84e8-4d2f-88f4-4845e2a05f80-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.794127 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-config-data\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.794938 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-config-data\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.796808 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.797100 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.801583 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-server-conf\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.807880 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.810578 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.817029 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/636b0761-84e8-4d2f-88f4-4845e2a05f80-pod-info\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.818642 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nlx7\" (UniqueName: \"kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-kube-api-access-6nlx7\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.823325 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/636b0761-84e8-4d2f-88f4-4845e2a05f80-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:57 crc kubenswrapper[4717]: I0218 12:04:57.835324 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " pod="openstack/rabbitmq-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.023474 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.024804 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.026861 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.039169 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.039446 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dj5rd"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.039692 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.040245 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.040574 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.040787 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.040803 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.047826 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.110687 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" event={"ID":"967d5877-0ade-411e-8b7c-16730b8dc3a1","Type":"ContainerStarted","Data":"22130b8499c251b79c3541000c41abc11ffb7d494cda00096068f44bcca27492"}
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.126178 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" event={"ID":"34c74aec-6f0a-4388-b1f8-46574087c035","Type":"ContainerStarted","Data":"62c447fd45b0994cdf0c232fdb4468927a8a24e0eadd4d33a52360dba29d1802"}
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.200331 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.200431 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.200461 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/468aa28e-8245-4024-815a-24d469dc17bf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.200492 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/468aa28e-8245-4024-815a-24d469dc17bf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.200552 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.200591 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9hnw\" (UniqueName: \"kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-kube-api-access-r9hnw\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.200674 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.200732 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.200769 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.200797 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.200844 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.302205 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.302745 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.302803 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.302825 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/468aa28e-8245-4024-815a-24d469dc17bf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.302849 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/468aa28e-8245-4024-815a-24d469dc17bf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.302898 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.302930 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9hnw\" (UniqueName: \"kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-kube-api-access-r9hnw\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.302955 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.302986 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.303018 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.303036 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.306367 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.319368 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.319563 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.319783 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.319817 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.320573 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.329451 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.332776 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.333722 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/468aa28e-8245-4024-815a-24d469dc17bf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.337243 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/468aa28e-8245-4024-815a-24d469dc17bf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.344497 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9hnw\" (UniqueName: \"kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-kube-api-access-r9hnw\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.354233 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.407664 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.628942 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 18 12:04:58 crc kubenswrapper[4717]: W0218 12:04:58.650279 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod636b0761_84e8_4d2f_88f4_4845e2a05f80.slice/crio-0c2cd2d34fd880c3feaf3904e76bb042509613d2b6030490ce2a5a50a89967f4 WatchSource:0}: Error finding container 0c2cd2d34fd880c3feaf3904e76bb042509613d2b6030490ce2a5a50a89967f4: Status 404 returned error can't find the container with id 0c2cd2d34fd880c3feaf3904e76bb042509613d2b6030490ce2a5a50a89967f4
Feb 18 12:04:58 crc kubenswrapper[4717]: I0218 12:04:58.970143 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 18 12:04:58 crc kubenswrapper[4717]: W0218 12:04:58.990641 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod468aa28e_8245_4024_815a_24d469dc17bf.slice/crio-0f4ccb89f9f252e31de5481ad0e9ab8beda8ddb021193db7cce6b8a58bd4cb33 WatchSource:0}: Error finding container 0f4ccb89f9f252e31de5481ad0e9ab8beda8ddb021193db7cce6b8a58bd4cb33: Status 404 returned error can't find the container with id 0f4ccb89f9f252e31de5481ad0e9ab8beda8ddb021193db7cce6b8a58bd4cb33
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.127926 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.139447 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.163019 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"636b0761-84e8-4d2f-88f4-4845e2a05f80","Type":"ContainerStarted","Data":"0c2cd2d34fd880c3feaf3904e76bb042509613d2b6030490ce2a5a50a89967f4"}
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.175374 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.176588 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.176606 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.176858 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-kpkmf"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.177656 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.182070 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.189087 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"468aa28e-8245-4024-815a-24d469dc17bf","Type":"ContainerStarted","Data":"0f4ccb89f9f252e31de5481ad0e9ab8beda8ddb021193db7cce6b8a58bd4cb33"}
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.227729 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gltzz\" (UniqueName: \"kubernetes.io/projected/9e12df56-53ef-42bc-9f15-2c7a89b391d1-kube-api-access-gltzz\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.227889 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e12df56-53ef-42bc-9f15-2c7a89b391d1-kolla-config\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.228019 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.228057 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9e12df56-53ef-42bc-9f15-2c7a89b391d1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.228112 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e12df56-53ef-42bc-9f15-2c7a89b391d1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.228182 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e12df56-53ef-42bc-9f15-2c7a89b391d1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.228216 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e12df56-53ef-42bc-9f15-2c7a89b391d1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.228274 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9e12df56-53ef-42bc-9f15-2c7a89b391d1-config-data-default\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.330045 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e12df56-53ef-42bc-9f15-2c7a89b391d1-kolla-config\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.330162 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.330206 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9e12df56-53ef-42bc-9f15-2c7a89b391d1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.330236 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e12df56-53ef-42bc-9f15-2c7a89b391d1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.330298 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e12df56-53ef-42bc-9f15-2c7a89b391d1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.330334 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e12df56-53ef-42bc-9f15-2c7a89b391d1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.330370 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9e12df56-53ef-42bc-9f15-2c7a89b391d1-config-data-default\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.330424 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gltzz\" (UniqueName: \"kubernetes.io/projected/9e12df56-53ef-42bc-9f15-2c7a89b391d1-kube-api-access-gltzz\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.330721 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.330930 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9e12df56-53ef-42bc-9f15-2c7a89b391d1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.331346 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e12df56-53ef-42bc-9f15-2c7a89b391d1-kolla-config\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.334979 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9e12df56-53ef-42bc-9f15-2c7a89b391d1-config-data-default\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.337625 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e12df56-53ef-42bc-9f15-2c7a89b391d1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.345045 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e12df56-53ef-42bc-9f15-2c7a89b391d1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.364093 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e12df56-53ef-42bc-9f15-2c7a89b391d1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.377074 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gltzz\" (UniqueName: \"kubernetes.io/projected/9e12df56-53ef-42bc-9f15-2c7a89b391d1-kube-api-access-gltzz\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.392534 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"9e12df56-53ef-42bc-9f15-2c7a89b391d1\") " pod="openstack/openstack-galera-0"
Feb 18 12:04:59 crc kubenswrapper[4717]: I0218 12:04:59.491960 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.134079 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.660110 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.661977 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.670838 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-6lpvr"
Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.671176 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.671472 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.671786 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.683141 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.711985 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.713344 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.715656 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.715772 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-66pnt" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.715968 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.723317 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.778836 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6ba708c6-57e5-4406-8773-2a700b0be0fc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.778896 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ba708c6-57e5-4406-8773-2a700b0be0fc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.778938 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6ba708c6-57e5-4406-8773-2a700b0be0fc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.778968 4717 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6ba708c6-57e5-4406-8773-2a700b0be0fc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.778995 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/760991b3-fcd6-4ea6-bc3b-3fad54f0c70c-config-data\") pod \"memcached-0\" (UID: \"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c\") " pod="openstack/memcached-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.779018 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba708c6-57e5-4406-8773-2a700b0be0fc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.779042 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.779063 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mslrv\" (UniqueName: \"kubernetes.io/projected/760991b3-fcd6-4ea6-bc3b-3fad54f0c70c-kube-api-access-mslrv\") pod \"memcached-0\" (UID: \"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c\") " pod="openstack/memcached-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.779080 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/760991b3-fcd6-4ea6-bc3b-3fad54f0c70c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c\") " pod="openstack/memcached-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.779125 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/760991b3-fcd6-4ea6-bc3b-3fad54f0c70c-kolla-config\") pod \"memcached-0\" (UID: \"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c\") " pod="openstack/memcached-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.779141 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba708c6-57e5-4406-8773-2a700b0be0fc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.779163 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrn6w\" (UniqueName: \"kubernetes.io/projected/6ba708c6-57e5-4406-8773-2a700b0be0fc-kube-api-access-wrn6w\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.779190 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760991b3-fcd6-4ea6-bc3b-3fad54f0c70c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c\") " pod="openstack/memcached-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.880774 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/760991b3-fcd6-4ea6-bc3b-3fad54f0c70c-config-data\") pod \"memcached-0\" (UID: \"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c\") " pod="openstack/memcached-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.880836 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6ba708c6-57e5-4406-8773-2a700b0be0fc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.880869 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6ba708c6-57e5-4406-8773-2a700b0be0fc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.880912 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba708c6-57e5-4406-8773-2a700b0be0fc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.880946 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.880978 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mslrv\" (UniqueName: \"kubernetes.io/projected/760991b3-fcd6-4ea6-bc3b-3fad54f0c70c-kube-api-access-mslrv\") pod 
\"memcached-0\" (UID: \"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c\") " pod="openstack/memcached-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.881002 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/760991b3-fcd6-4ea6-bc3b-3fad54f0c70c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c\") " pod="openstack/memcached-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.881026 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/760991b3-fcd6-4ea6-bc3b-3fad54f0c70c-kolla-config\") pod \"memcached-0\" (UID: \"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c\") " pod="openstack/memcached-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.881052 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba708c6-57e5-4406-8773-2a700b0be0fc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.881084 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrn6w\" (UniqueName: \"kubernetes.io/projected/6ba708c6-57e5-4406-8773-2a700b0be0fc-kube-api-access-wrn6w\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.881123 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760991b3-fcd6-4ea6-bc3b-3fad54f0c70c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c\") " pod="openstack/memcached-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 
12:05:00.881216 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6ba708c6-57e5-4406-8773-2a700b0be0fc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.881240 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ba708c6-57e5-4406-8773-2a700b0be0fc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.883136 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ba708c6-57e5-4406-8773-2a700b0be0fc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.883866 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/760991b3-fcd6-4ea6-bc3b-3fad54f0c70c-config-data\") pod \"memcached-0\" (UID: \"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c\") " pod="openstack/memcached-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.884424 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6ba708c6-57e5-4406-8773-2a700b0be0fc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.884721 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/6ba708c6-57e5-4406-8773-2a700b0be0fc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.886427 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/760991b3-fcd6-4ea6-bc3b-3fad54f0c70c-kolla-config\") pod \"memcached-0\" (UID: \"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c\") " pod="openstack/memcached-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.886628 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.887750 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6ba708c6-57e5-4406-8773-2a700b0be0fc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.894851 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba708c6-57e5-4406-8773-2a700b0be0fc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.894934 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/760991b3-fcd6-4ea6-bc3b-3fad54f0c70c-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c\") " pod="openstack/memcached-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.898997 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760991b3-fcd6-4ea6-bc3b-3fad54f0c70c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c\") " pod="openstack/memcached-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.899133 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba708c6-57e5-4406-8773-2a700b0be0fc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.907732 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrn6w\" (UniqueName: \"kubernetes.io/projected/6ba708c6-57e5-4406-8773-2a700b0be0fc-kube-api-access-wrn6w\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.910922 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mslrv\" (UniqueName: \"kubernetes.io/projected/760991b3-fcd6-4ea6-bc3b-3fad54f0c70c-kube-api-access-mslrv\") pod \"memcached-0\" (UID: \"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c\") " pod="openstack/memcached-0" Feb 18 12:05:00 crc kubenswrapper[4717]: I0218 12:05:00.920601 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"6ba708c6-57e5-4406-8773-2a700b0be0fc\") " pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:01 crc kubenswrapper[4717]: I0218 12:05:01.001185 4717 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:01 crc kubenswrapper[4717]: I0218 12:05:01.040085 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 18 12:05:03 crc kubenswrapper[4717]: I0218 12:05:03.249856 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 12:05:03 crc kubenswrapper[4717]: I0218 12:05:03.254748 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 12:05:03 crc kubenswrapper[4717]: I0218 12:05:03.264746 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 12:05:03 crc kubenswrapper[4717]: I0218 12:05:03.265047 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-sh5lx" Feb 18 12:05:03 crc kubenswrapper[4717]: I0218 12:05:03.334824 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdj56\" (UniqueName: \"kubernetes.io/projected/4aa4314c-f7fd-4bad-909b-40cd26c1a377-kube-api-access-tdj56\") pod \"kube-state-metrics-0\" (UID: \"4aa4314c-f7fd-4bad-909b-40cd26c1a377\") " pod="openstack/kube-state-metrics-0" Feb 18 12:05:03 crc kubenswrapper[4717]: I0218 12:05:03.437718 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdj56\" (UniqueName: \"kubernetes.io/projected/4aa4314c-f7fd-4bad-909b-40cd26c1a377-kube-api-access-tdj56\") pod \"kube-state-metrics-0\" (UID: \"4aa4314c-f7fd-4bad-909b-40cd26c1a377\") " pod="openstack/kube-state-metrics-0" Feb 18 12:05:03 crc kubenswrapper[4717]: I0218 12:05:03.462452 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdj56\" (UniqueName: \"kubernetes.io/projected/4aa4314c-f7fd-4bad-909b-40cd26c1a377-kube-api-access-tdj56\") pod \"kube-state-metrics-0\" (UID: 
\"4aa4314c-f7fd-4bad-909b-40cd26c1a377\") " pod="openstack/kube-state-metrics-0" Feb 18 12:05:03 crc kubenswrapper[4717]: I0218 12:05:03.590614 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 12:05:05 crc kubenswrapper[4717]: I0218 12:05:05.288855 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9e12df56-53ef-42bc-9f15-2c7a89b391d1","Type":"ContainerStarted","Data":"d72afc186e4464acf43ee89df626ccd62299eded2a24023407136ac5564c4bda"} Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.527751 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cqvjv"] Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.532302 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.537686 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.537910 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.538019 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-zgww5" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.543523 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cqvjv"] Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.605359 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-zzbgg"] Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.607872 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.617804 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e3a25d1-3ad3-4ecb-bca6-84643516d734-ovn-controller-tls-certs\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.617863 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3a25d1-3ad3-4ecb-bca6-84643516d734-combined-ca-bundle\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.617914 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e3a25d1-3ad3-4ecb-bca6-84643516d734-scripts\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.618075 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e3a25d1-3ad3-4ecb-bca6-84643516d734-var-run-ovn\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.618104 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6e3a25d1-3ad3-4ecb-bca6-84643516d734-var-log-ovn\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " 
pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.618132 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4sxc\" (UniqueName: \"kubernetes.io/projected/6e3a25d1-3ad3-4ecb-bca6-84643516d734-kube-api-access-n4sxc\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.618184 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6e3a25d1-3ad3-4ecb-bca6-84643516d734-var-run\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.639911 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zzbgg"] Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.719143 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6e3a25d1-3ad3-4ecb-bca6-84643516d734-var-run\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.719213 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e3a25d1-3ad3-4ecb-bca6-84643516d734-ovn-controller-tls-certs\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.719240 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3a25d1-3ad3-4ecb-bca6-84643516d734-combined-ca-bundle\") pod 
\"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.719277 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvb6f\" (UniqueName: \"kubernetes.io/projected/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-kube-api-access-vvb6f\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") " pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.719302 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e3a25d1-3ad3-4ecb-bca6-84643516d734-scripts\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.719343 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-var-log\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") " pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.719364 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-var-run\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") " pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.719396 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-scripts\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") 
" pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.719443 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e3a25d1-3ad3-4ecb-bca6-84643516d734-var-run-ovn\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.719485 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6e3a25d1-3ad3-4ecb-bca6-84643516d734-var-log-ovn\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.719508 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4sxc\" (UniqueName: \"kubernetes.io/projected/6e3a25d1-3ad3-4ecb-bca6-84643516d734-kube-api-access-n4sxc\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.720535 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6e3a25d1-3ad3-4ecb-bca6-84643516d734-var-run\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.722373 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-etc-ovs\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") " pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.722528 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-var-lib\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") " pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.723044 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e3a25d1-3ad3-4ecb-bca6-84643516d734-var-run-ovn\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.723200 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6e3a25d1-3ad3-4ecb-bca6-84643516d734-var-log-ovn\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.725192 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e3a25d1-3ad3-4ecb-bca6-84643516d734-scripts\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.729795 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e3a25d1-3ad3-4ecb-bca6-84643516d734-ovn-controller-tls-certs\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.742747 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4sxc\" (UniqueName: 
\"kubernetes.io/projected/6e3a25d1-3ad3-4ecb-bca6-84643516d734-kube-api-access-n4sxc\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.743061 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3a25d1-3ad3-4ecb-bca6-84643516d734-combined-ca-bundle\") pod \"ovn-controller-cqvjv\" (UID: \"6e3a25d1-3ad3-4ecb-bca6-84643516d734\") " pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.824298 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-var-log\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") " pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.824778 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-var-run\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") " pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.824800 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-scripts\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") " pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.824878 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-etc-ovs\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") 
" pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.824919 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-var-lib\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") " pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.824961 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvb6f\" (UniqueName: \"kubernetes.io/projected/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-kube-api-access-vvb6f\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") " pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.825566 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-var-log\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") " pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.825627 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-var-run\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") " pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.828080 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-etc-ovs\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") " pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.828116 4717 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-scripts\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") " pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.828315 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-var-lib\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") " pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.871062 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.879139 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvb6f\" (UniqueName: \"kubernetes.io/projected/d145d1aa-1d6c-4285-9670-42d3bb4ea1cd-kube-api-access-vvb6f\") pod \"ovn-controller-ovs-zzbgg\" (UID: \"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd\") " pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:06 crc kubenswrapper[4717]: I0218 12:05:06.941511 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.417653 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.421285 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.425192 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.425830 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.426314 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.426441 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.428270 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tch4w" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.432011 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.543375 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/77683781-6580-4589-8869-bbaea0d6d8a0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.543440 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77683781-6580-4589-8869-bbaea0d6d8a0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.543791 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.543856 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77683781-6580-4589-8869-bbaea0d6d8a0-config\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.543951 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77683781-6580-4589-8869-bbaea0d6d8a0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.544241 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/77683781-6580-4589-8869-bbaea0d6d8a0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.544476 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnr6c\" (UniqueName: \"kubernetes.io/projected/77683781-6580-4589-8869-bbaea0d6d8a0-kube-api-access-pnr6c\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.544535 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/77683781-6580-4589-8869-bbaea0d6d8a0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.646412 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnr6c\" (UniqueName: \"kubernetes.io/projected/77683781-6580-4589-8869-bbaea0d6d8a0-kube-api-access-pnr6c\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.646492 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/77683781-6580-4589-8869-bbaea0d6d8a0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.646528 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/77683781-6580-4589-8869-bbaea0d6d8a0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.646567 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77683781-6580-4589-8869-bbaea0d6d8a0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.646626 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " 
pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.646649 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77683781-6580-4589-8869-bbaea0d6d8a0-config\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.646684 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77683781-6580-4589-8869-bbaea0d6d8a0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.646714 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/77683781-6580-4589-8869-bbaea0d6d8a0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.647207 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.647905 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/77683781-6580-4589-8869-bbaea0d6d8a0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.648859 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/77683781-6580-4589-8869-bbaea0d6d8a0-config\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.648910 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77683781-6580-4589-8869-bbaea0d6d8a0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.653133 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/77683781-6580-4589-8869-bbaea0d6d8a0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.653203 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77683781-6580-4589-8869-bbaea0d6d8a0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.659075 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/77683781-6580-4589-8869-bbaea0d6d8a0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.668138 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnr6c\" (UniqueName: \"kubernetes.io/projected/77683781-6580-4589-8869-bbaea0d6d8a0-kube-api-access-pnr6c\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") 
" pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.678366 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"77683781-6580-4589-8869-bbaea0d6d8a0\") " pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:07 crc kubenswrapper[4717]: I0218 12:05:07.761889 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.716639 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.718675 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.726337 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-bjtgj" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.726660 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.726887 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.727057 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.733978 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.790028 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41966c80-d352-4f94-b011-1ef922e3250f-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.790105 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.790136 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/41966c80-d352-4f94-b011-1ef922e3250f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.790167 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41966c80-d352-4f94-b011-1ef922e3250f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.790202 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41966c80-d352-4f94-b011-1ef922e3250f-config\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.790275 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z2n9\" (UniqueName: \"kubernetes.io/projected/41966c80-d352-4f94-b011-1ef922e3250f-kube-api-access-9z2n9\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" 
Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.790319 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41966c80-d352-4f94-b011-1ef922e3250f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.790345 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/41966c80-d352-4f94-b011-1ef922e3250f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.892069 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41966c80-d352-4f94-b011-1ef922e3250f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.892172 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.892221 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/41966c80-d352-4f94-b011-1ef922e3250f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.892254 4717 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41966c80-d352-4f94-b011-1ef922e3250f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.892360 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41966c80-d352-4f94-b011-1ef922e3250f-config\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.892537 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z2n9\" (UniqueName: \"kubernetes.io/projected/41966c80-d352-4f94-b011-1ef922e3250f-kube-api-access-9z2n9\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.892601 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41966c80-d352-4f94-b011-1ef922e3250f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.892628 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/41966c80-d352-4f94-b011-1ef922e3250f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.892839 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"41966c80-d352-4f94-b011-1ef922e3250f\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.893814 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/41966c80-d352-4f94-b011-1ef922e3250f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.894517 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41966c80-d352-4f94-b011-1ef922e3250f-config\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.895735 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/41966c80-d352-4f94-b011-1ef922e3250f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.900574 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41966c80-d352-4f94-b011-1ef922e3250f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.901965 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41966c80-d352-4f94-b011-1ef922e3250f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.906546 4717 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/41966c80-d352-4f94-b011-1ef922e3250f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.916297 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:09 crc kubenswrapper[4717]: I0218 12:05:09.918639 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z2n9\" (UniqueName: \"kubernetes.io/projected/41966c80-d352-4f94-b011-1ef922e3250f-kube-api-access-9z2n9\") pod \"ovsdbserver-sb-0\" (UID: \"41966c80-d352-4f94-b011-1ef922e3250f\") " pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:10 crc kubenswrapper[4717]: I0218 12:05:10.054798 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:21 crc kubenswrapper[4717]: E0218 12:05:21.520036 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 12:05:21 crc kubenswrapper[4717]: E0218 12:05:21.521100 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9hnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(468aa28e-8245-4024-815a-24d469dc17bf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:05:21 crc 
kubenswrapper[4717]: E0218 12:05:21.524153 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="468aa28e-8245-4024-815a-24d469dc17bf" Feb 18 12:05:21 crc kubenswrapper[4717]: E0218 12:05:21.551577 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 12:05:21 crc kubenswrapper[4717]: E0218 12:05:21.552028 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nlx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(636b0761-84e8-4d2f-88f4-4845e2a05f80): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:05:21 crc 
kubenswrapper[4717]: E0218 12:05:21.553237 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="636b0761-84e8-4d2f-88f4-4845e2a05f80" Feb 18 12:05:22 crc kubenswrapper[4717]: E0218 12:05:22.393294 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 12:05:22 crc kubenswrapper[4717]: E0218 12:05:22.394527 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24rdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-k9qjk_openstack(933fff0e-44a0-4c01-9413-fdbee4638f09): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:05:22 crc kubenswrapper[4717]: E0218 12:05:22.396040 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-k9qjk" podUID="933fff0e-44a0-4c01-9413-fdbee4638f09" Feb 18 12:05:22 crc kubenswrapper[4717]: E0218 12:05:22.416721 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 12:05:22 crc kubenswrapper[4717]: E0218 12:05:22.416966 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhdln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullP
olicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-qdxkj_openstack(34c74aec-6f0a-4388-b1f8-46574087c035): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:05:22 crc kubenswrapper[4717]: E0218 12:05:22.418194 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" podUID="34c74aec-6f0a-4388-b1f8-46574087c035" Feb 18 12:05:22 crc kubenswrapper[4717]: E0218 12:05:22.436754 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="636b0761-84e8-4d2f-88f4-4845e2a05f80" Feb 18 12:05:22 crc kubenswrapper[4717]: E0218 12:05:22.437285 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" podUID="34c74aec-6f0a-4388-b1f8-46574087c035" Feb 18 12:05:22 crc kubenswrapper[4717]: E0218 12:05:22.439468 4717 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="468aa28e-8245-4024-815a-24d469dc17bf" Feb 18 12:05:24 crc kubenswrapper[4717]: E0218 12:05:24.700240 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 18 12:05:24 crc kubenswrapper[4717]: E0218 12:05:24.700767 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gltzz,ReadOnly:true,Mount
Path:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(9e12df56-53ef-42bc-9f15-2c7a89b391d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:05:24 crc kubenswrapper[4717]: E0218 12:05:24.702084 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="9e12df56-53ef-42bc-9f15-2c7a89b391d1" Feb 18 12:05:24 crc kubenswrapper[4717]: E0218 12:05:24.750654 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 12:05:24 crc kubenswrapper[4717]: E0218 12:05:24.750947 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 
--log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bkphx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-6d6kw_openstack(f28451bf-06ad-402e-8842-5dad8e6d236b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:05:24 
crc kubenswrapper[4717]: E0218 12:05:24.752325 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-6d6kw" podUID="f28451bf-06ad-402e-8842-5dad8e6d236b" Feb 18 12:05:24 crc kubenswrapper[4717]: E0218 12:05:24.777038 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 12:05:24 crc kubenswrapper[4717]: E0218 12:05:24.777295 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fb2mx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-c29fd_openstack(967d5877-0ade-411e-8b7c-16730b8dc3a1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:05:24 crc kubenswrapper[4717]: E0218 12:05:24.779542 4717 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" podUID="967d5877-0ade-411e-8b7c-16730b8dc3a1" Feb 18 12:05:24 crc kubenswrapper[4717]: I0218 12:05:24.824750 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-k9qjk" Feb 18 12:05:24 crc kubenswrapper[4717]: I0218 12:05:24.975761 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933fff0e-44a0-4c01-9413-fdbee4638f09-config\") pod \"933fff0e-44a0-4c01-9413-fdbee4638f09\" (UID: \"933fff0e-44a0-4c01-9413-fdbee4638f09\") " Feb 18 12:05:24 crc kubenswrapper[4717]: I0218 12:05:24.976346 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24rdz\" (UniqueName: \"kubernetes.io/projected/933fff0e-44a0-4c01-9413-fdbee4638f09-kube-api-access-24rdz\") pod \"933fff0e-44a0-4c01-9413-fdbee4638f09\" (UID: \"933fff0e-44a0-4c01-9413-fdbee4638f09\") " Feb 18 12:05:24 crc kubenswrapper[4717]: I0218 12:05:24.976702 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/933fff0e-44a0-4c01-9413-fdbee4638f09-config" (OuterVolumeSpecName: "config") pod "933fff0e-44a0-4c01-9413-fdbee4638f09" (UID: "933fff0e-44a0-4c01-9413-fdbee4638f09"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:24 crc kubenswrapper[4717]: I0218 12:05:24.977420 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933fff0e-44a0-4c01-9413-fdbee4638f09-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:24 crc kubenswrapper[4717]: I0218 12:05:24.983648 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/933fff0e-44a0-4c01-9413-fdbee4638f09-kube-api-access-24rdz" (OuterVolumeSpecName: "kube-api-access-24rdz") pod "933fff0e-44a0-4c01-9413-fdbee4638f09" (UID: "933fff0e-44a0-4c01-9413-fdbee4638f09"). InnerVolumeSpecName "kube-api-access-24rdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:05:25 crc kubenswrapper[4717]: I0218 12:05:25.079390 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24rdz\" (UniqueName: \"kubernetes.io/projected/933fff0e-44a0-4c01-9413-fdbee4638f09-kube-api-access-24rdz\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:25 crc kubenswrapper[4717]: I0218 12:05:25.398219 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 12:05:25 crc kubenswrapper[4717]: I0218 12:05:25.422308 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 12:05:25 crc kubenswrapper[4717]: W0218 12:05:25.426194 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ba708c6_57e5_4406_8773_2a700b0be0fc.slice/crio-11626fcfe24b69bc20dc2027c8369b601c04d4a4528105ea4f3660e9e15c527e WatchSource:0}: Error finding container 11626fcfe24b69bc20dc2027c8369b601c04d4a4528105ea4f3660e9e15c527e: Status 404 returned error can't find the container with id 11626fcfe24b69bc20dc2027c8369b601c04d4a4528105ea4f3660e9e15c527e Feb 18 12:05:25 crc kubenswrapper[4717]: W0218 12:05:25.426696 4717 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod760991b3_fcd6_4ea6_bc3b_3fad54f0c70c.slice/crio-9294ac33556fc5b9d7fce7ec628271b05fbd053746f5fa270e4d4e8df1f5851c WatchSource:0}: Error finding container 9294ac33556fc5b9d7fce7ec628271b05fbd053746f5fa270e4d4e8df1f5851c: Status 404 returned error can't find the container with id 9294ac33556fc5b9d7fce7ec628271b05fbd053746f5fa270e4d4e8df1f5851c Feb 18 12:05:25 crc kubenswrapper[4717]: I0218 12:05:25.431180 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cqvjv"] Feb 18 12:05:25 crc kubenswrapper[4717]: I0218 12:05:25.481647 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c","Type":"ContainerStarted","Data":"9294ac33556fc5b9d7fce7ec628271b05fbd053746f5fa270e4d4e8df1f5851c"} Feb 18 12:05:25 crc kubenswrapper[4717]: I0218 12:05:25.488229 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6ba708c6-57e5-4406-8773-2a700b0be0fc","Type":"ContainerStarted","Data":"11626fcfe24b69bc20dc2027c8369b601c04d4a4528105ea4f3660e9e15c527e"} Feb 18 12:05:25 crc kubenswrapper[4717]: I0218 12:05:25.494459 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-k9qjk" event={"ID":"933fff0e-44a0-4c01-9413-fdbee4638f09","Type":"ContainerDied","Data":"21c03a4b4205ed4d7b32cce5e0cc818622659846fd3da02d0ae7eba395c528a3"} Feb 18 12:05:25 crc kubenswrapper[4717]: I0218 12:05:25.494565 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-k9qjk" Feb 18 12:05:25 crc kubenswrapper[4717]: I0218 12:05:25.498108 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cqvjv" event={"ID":"6e3a25d1-3ad3-4ecb-bca6-84643516d734","Type":"ContainerStarted","Data":"bfb06d53b96328f373b2a5b7020fca1a565ef99d7bb605f50f88f7ca8969d4c8"} Feb 18 12:05:25 crc kubenswrapper[4717]: E0218 12:05:25.502562 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="9e12df56-53ef-42bc-9f15-2c7a89b391d1" Feb 18 12:05:25 crc kubenswrapper[4717]: E0218 12:05:25.502842 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" podUID="967d5877-0ade-411e-8b7c-16730b8dc3a1" Feb 18 12:05:25 crc kubenswrapper[4717]: I0218 12:05:25.581557 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 12:05:25 crc kubenswrapper[4717]: I0218 12:05:25.669220 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k9qjk"] Feb 18 12:05:25 crc kubenswrapper[4717]: I0218 12:05:25.683657 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 12:05:25 crc kubenswrapper[4717]: I0218 12:05:25.725465 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-k9qjk"] Feb 18 12:05:25 crc kubenswrapper[4717]: I0218 12:05:25.756709 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 12:05:25 crc kubenswrapper[4717]: I0218 
12:05:25.858021 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zzbgg"] Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.064092 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6d6kw" Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.223068 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f28451bf-06ad-402e-8842-5dad8e6d236b-config\") pod \"f28451bf-06ad-402e-8842-5dad8e6d236b\" (UID: \"f28451bf-06ad-402e-8842-5dad8e6d236b\") " Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.223759 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkphx\" (UniqueName: \"kubernetes.io/projected/f28451bf-06ad-402e-8842-5dad8e6d236b-kube-api-access-bkphx\") pod \"f28451bf-06ad-402e-8842-5dad8e6d236b\" (UID: \"f28451bf-06ad-402e-8842-5dad8e6d236b\") " Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.223828 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f28451bf-06ad-402e-8842-5dad8e6d236b-dns-svc\") pod \"f28451bf-06ad-402e-8842-5dad8e6d236b\" (UID: \"f28451bf-06ad-402e-8842-5dad8e6d236b\") " Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.223860 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f28451bf-06ad-402e-8842-5dad8e6d236b-config" (OuterVolumeSpecName: "config") pod "f28451bf-06ad-402e-8842-5dad8e6d236b" (UID: "f28451bf-06ad-402e-8842-5dad8e6d236b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.224241 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f28451bf-06ad-402e-8842-5dad8e6d236b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f28451bf-06ad-402e-8842-5dad8e6d236b" (UID: "f28451bf-06ad-402e-8842-5dad8e6d236b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.224289 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f28451bf-06ad-402e-8842-5dad8e6d236b-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.228062 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f28451bf-06ad-402e-8842-5dad8e6d236b-kube-api-access-bkphx" (OuterVolumeSpecName: "kube-api-access-bkphx") pod "f28451bf-06ad-402e-8842-5dad8e6d236b" (UID: "f28451bf-06ad-402e-8842-5dad8e6d236b"). InnerVolumeSpecName "kube-api-access-bkphx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.326547 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkphx\" (UniqueName: \"kubernetes.io/projected/f28451bf-06ad-402e-8842-5dad8e6d236b-kube-api-access-bkphx\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.326599 4717 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f28451bf-06ad-402e-8842-5dad8e6d236b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.509164 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zzbgg" event={"ID":"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd","Type":"ContainerStarted","Data":"82de55c6e980ca99e2d43331730ea590f8c2dfef2c362e0cb1d9e6c57b217a52"} Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.511913 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6ba708c6-57e5-4406-8773-2a700b0be0fc","Type":"ContainerStarted","Data":"10acaf0a3216b6e3fb4d5449b00c2ca5d0042e33863b6d4980b779952da632c1"} Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.515380 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4aa4314c-f7fd-4bad-909b-40cd26c1a377","Type":"ContainerStarted","Data":"c70cf53d59dfe14d2a71c6ec916c0864986647bcdd474ef38f9fbcd582cf6eb4"} Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.518896 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"77683781-6580-4589-8869-bbaea0d6d8a0","Type":"ContainerStarted","Data":"6fc3e25ea6726baff5d9955111bdbd5f6bfa880c6a8dd160a2ecc043368d4429"} Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.520356 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6d6kw" 
event={"ID":"f28451bf-06ad-402e-8842-5dad8e6d236b","Type":"ContainerDied","Data":"650f2a4d84329b1cdf13bfc082e1d59e37ba0a590d54ef742c7a6a7367bfd585"} Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.520390 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6d6kw" Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.530891 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"41966c80-d352-4f94-b011-1ef922e3250f","Type":"ContainerStarted","Data":"eede14f4e79c125aa9b60bd502d80fc1ef1c11f0a092d5b633d8d7b48c148fd4"} Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.630136 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6d6kw"] Feb 18 12:05:26 crc kubenswrapper[4717]: I0218 12:05:26.686437 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6d6kw"] Feb 18 12:05:27 crc kubenswrapper[4717]: I0218 12:05:27.072251 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="933fff0e-44a0-4c01-9413-fdbee4638f09" path="/var/lib/kubelet/pods/933fff0e-44a0-4c01-9413-fdbee4638f09/volumes" Feb 18 12:05:27 crc kubenswrapper[4717]: I0218 12:05:27.075933 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f28451bf-06ad-402e-8842-5dad8e6d236b" path="/var/lib/kubelet/pods/f28451bf-06ad-402e-8842-5dad8e6d236b/volumes" Feb 18 12:05:30 crc kubenswrapper[4717]: I0218 12:05:30.577627 4717 generic.go:334] "Generic (PLEG): container finished" podID="6ba708c6-57e5-4406-8773-2a700b0be0fc" containerID="10acaf0a3216b6e3fb4d5449b00c2ca5d0042e33863b6d4980b779952da632c1" exitCode=0 Feb 18 12:05:30 crc kubenswrapper[4717]: I0218 12:05:30.578389 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"6ba708c6-57e5-4406-8773-2a700b0be0fc","Type":"ContainerDied","Data":"10acaf0a3216b6e3fb4d5449b00c2ca5d0042e33863b6d4980b779952da632c1"} Feb 18 12:05:33 crc kubenswrapper[4717]: I0218 12:05:33.643021 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"760991b3-fcd6-4ea6-bc3b-3fad54f0c70c","Type":"ContainerStarted","Data":"dcbda8774cd28a91d353c10654b0b49f57dac2aab378528ce7e96b0c88161e5c"} Feb 18 12:05:33 crc kubenswrapper[4717]: I0218 12:05:33.643967 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 18 12:05:33 crc kubenswrapper[4717]: I0218 12:05:33.645521 4717 generic.go:334] "Generic (PLEG): container finished" podID="d145d1aa-1d6c-4285-9670-42d3bb4ea1cd" containerID="0712ebb559be966fab645d548c115535c2b39d65f3eacfbe7e8afcd8b359135a" exitCode=0 Feb 18 12:05:33 crc kubenswrapper[4717]: I0218 12:05:33.646116 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zzbgg" event={"ID":"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd","Type":"ContainerDied","Data":"0712ebb559be966fab645d548c115535c2b39d65f3eacfbe7e8afcd8b359135a"} Feb 18 12:05:33 crc kubenswrapper[4717]: I0218 12:05:33.648703 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6ba708c6-57e5-4406-8773-2a700b0be0fc","Type":"ContainerStarted","Data":"758993635f970b3a10f128d1ca67e84c305d5172b12612fcd33324026b29ddc1"} Feb 18 12:05:33 crc kubenswrapper[4717]: I0218 12:05:33.651287 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cqvjv" event={"ID":"6e3a25d1-3ad3-4ecb-bca6-84643516d734","Type":"ContainerStarted","Data":"5359cf495e379ca062c076a897a9e2629b504d3f3d45c2209cecc91dcdceed38"} Feb 18 12:05:33 crc kubenswrapper[4717]: I0218 12:05:33.651875 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-cqvjv" Feb 18 12:05:33 crc kubenswrapper[4717]: 
I0218 12:05:33.658191 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4aa4314c-f7fd-4bad-909b-40cd26c1a377","Type":"ContainerStarted","Data":"8e848b6e53f0edfc455ea0d32f2bc5665ee1f6ba97202a4edfd327911137410a"} Feb 18 12:05:33 crc kubenswrapper[4717]: I0218 12:05:33.658343 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 12:05:33 crc kubenswrapper[4717]: I0218 12:05:33.666741 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"77683781-6580-4589-8869-bbaea0d6d8a0","Type":"ContainerStarted","Data":"243f4d53af5def76cad4c8bb1637faa36796cff047d47368c30405f1c03c2f15"} Feb 18 12:05:33 crc kubenswrapper[4717]: I0218 12:05:33.668749 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"41966c80-d352-4f94-b011-1ef922e3250f","Type":"ContainerStarted","Data":"6a405e021696c8f82cfce0539707ea43cb6467028a04b217caa2dfd35ffe745f"} Feb 18 12:05:33 crc kubenswrapper[4717]: I0218 12:05:33.672844 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=27.962028374 podStartE2EDuration="33.672813224s" podCreationTimestamp="2026-02-18 12:05:00 +0000 UTC" firstStartedPulling="2026-02-18 12:05:25.447405683 +0000 UTC m=+959.849506999" lastFinishedPulling="2026-02-18 12:05:31.158190533 +0000 UTC m=+965.560291849" observedRunningTime="2026-02-18 12:05:33.663519845 +0000 UTC m=+968.065621171" watchObservedRunningTime="2026-02-18 12:05:33.672813224 +0000 UTC m=+968.074914540" Feb 18 12:05:33 crc kubenswrapper[4717]: I0218 12:05:33.687675 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=23.88163106 podStartE2EDuration="30.687646527s" podCreationTimestamp="2026-02-18 12:05:03 +0000 UTC" firstStartedPulling="2026-02-18 12:05:25.743585482 +0000 
UTC m=+960.145686788" lastFinishedPulling="2026-02-18 12:05:32.549600939 +0000 UTC m=+966.951702255" observedRunningTime="2026-02-18 12:05:33.684094598 +0000 UTC m=+968.086195914" watchObservedRunningTime="2026-02-18 12:05:33.687646527 +0000 UTC m=+968.089747843" Feb 18 12:05:33 crc kubenswrapper[4717]: I0218 12:05:33.772950 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cqvjv" podStartSLOduration=21.499696624 podStartE2EDuration="27.772920379s" podCreationTimestamp="2026-02-18 12:05:06 +0000 UTC" firstStartedPulling="2026-02-18 12:05:25.449783759 +0000 UTC m=+959.851885075" lastFinishedPulling="2026-02-18 12:05:31.723007514 +0000 UTC m=+966.125108830" observedRunningTime="2026-02-18 12:05:33.737804562 +0000 UTC m=+968.139905878" watchObservedRunningTime="2026-02-18 12:05:33.772920379 +0000 UTC m=+968.175021695" Feb 18 12:05:33 crc kubenswrapper[4717]: I0218 12:05:33.780813 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=34.330431569 podStartE2EDuration="34.780779297s" podCreationTimestamp="2026-02-18 12:04:59 +0000 UTC" firstStartedPulling="2026-02-18 12:05:25.444342277 +0000 UTC m=+959.846443593" lastFinishedPulling="2026-02-18 12:05:25.894690005 +0000 UTC m=+960.296791321" observedRunningTime="2026-02-18 12:05:33.771576591 +0000 UTC m=+968.173677917" watchObservedRunningTime="2026-02-18 12:05:33.780779297 +0000 UTC m=+968.182880613" Feb 18 12:05:34 crc kubenswrapper[4717]: I0218 12:05:34.689721 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zzbgg" event={"ID":"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd","Type":"ContainerStarted","Data":"fa6f2394ad10f856f165f6b18344f2451e7a7785336adaef8f0d8f9f3678edec"} Feb 18 12:05:34 crc kubenswrapper[4717]: I0218 12:05:34.693137 4717 generic.go:334] "Generic (PLEG): container finished" podID="34c74aec-6f0a-4388-b1f8-46574087c035" 
containerID="3f073da1afc165273f734b8dcf207477d269b699a4e8564e6f252267d5993712" exitCode=0 Feb 18 12:05:34 crc kubenswrapper[4717]: I0218 12:05:34.693300 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" event={"ID":"34c74aec-6f0a-4388-b1f8-46574087c035","Type":"ContainerDied","Data":"3f073da1afc165273f734b8dcf207477d269b699a4e8564e6f252267d5993712"} Feb 18 12:05:35 crc kubenswrapper[4717]: I0218 12:05:35.701064 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"77683781-6580-4589-8869-bbaea0d6d8a0","Type":"ContainerStarted","Data":"7344b9be19f3f734555d7dda28d16cf8c4a193f32d5da74dad8c3dcd395ec86f"} Feb 18 12:05:35 crc kubenswrapper[4717]: I0218 12:05:35.705017 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"41966c80-d352-4f94-b011-1ef922e3250f","Type":"ContainerStarted","Data":"cab26b55ef35bd1b43c696e4c32544d365fe72885f7a3c5f0cd2fbe9889e9649"} Feb 18 12:05:35 crc kubenswrapper[4717]: I0218 12:05:35.707553 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" event={"ID":"34c74aec-6f0a-4388-b1f8-46574087c035","Type":"ContainerStarted","Data":"8f5872dd2f0e2109801e0061a67ff15112c762a403cda89b478c8ecbeb983b9c"} Feb 18 12:05:35 crc kubenswrapper[4717]: I0218 12:05:35.707819 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" Feb 18 12:05:35 crc kubenswrapper[4717]: I0218 12:05:35.710311 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zzbgg" event={"ID":"d145d1aa-1d6c-4285-9670-42d3bb4ea1cd","Type":"ContainerStarted","Data":"b5a155d75050ec9b12732285c719984c152ae10c1e2e5554788fac86c6921573"} Feb 18 12:05:35 crc kubenswrapper[4717]: I0218 12:05:35.710584 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:35 
crc kubenswrapper[4717]: I0218 12:05:35.710636 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:05:35 crc kubenswrapper[4717]: I0218 12:05:35.731029 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=20.648903586 podStartE2EDuration="29.731005478s" podCreationTimestamp="2026-02-18 12:05:06 +0000 UTC" firstStartedPulling="2026-02-18 12:05:25.56592551 +0000 UTC m=+959.968026826" lastFinishedPulling="2026-02-18 12:05:34.648027402 +0000 UTC m=+969.050128718" observedRunningTime="2026-02-18 12:05:35.72389576 +0000 UTC m=+970.125997076" watchObservedRunningTime="2026-02-18 12:05:35.731005478 +0000 UTC m=+970.133106794" Feb 18 12:05:35 crc kubenswrapper[4717]: I0218 12:05:35.748020 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=18.898359634 podStartE2EDuration="27.74800033s" podCreationTimestamp="2026-02-18 12:05:08 +0000 UTC" firstStartedPulling="2026-02-18 12:05:25.806834541 +0000 UTC m=+960.208935867" lastFinishedPulling="2026-02-18 12:05:34.656475247 +0000 UTC m=+969.058576563" observedRunningTime="2026-02-18 12:05:35.745498551 +0000 UTC m=+970.147599867" watchObservedRunningTime="2026-02-18 12:05:35.74800033 +0000 UTC m=+970.150101646" Feb 18 12:05:35 crc kubenswrapper[4717]: I0218 12:05:35.769637 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" podStartSLOduration=3.714281757 podStartE2EDuration="39.769610642s" podCreationTimestamp="2026-02-18 12:04:56 +0000 UTC" firstStartedPulling="2026-02-18 12:04:57.378212073 +0000 UTC m=+931.780313389" lastFinishedPulling="2026-02-18 12:05:33.433540958 +0000 UTC m=+967.835642274" observedRunningTime="2026-02-18 12:05:35.763776969 +0000 UTC m=+970.165878295" watchObservedRunningTime="2026-02-18 12:05:35.769610642 +0000 UTC m=+970.171711958" 
Feb 18 12:05:35 crc kubenswrapper[4717]: I0218 12:05:35.795230 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-zzbgg" podStartSLOduration=24.233183801 podStartE2EDuration="29.795207584s" podCreationTimestamp="2026-02-18 12:05:06 +0000 UTC" firstStartedPulling="2026-02-18 12:05:25.8909219 +0000 UTC m=+960.293023216" lastFinishedPulling="2026-02-18 12:05:31.452945683 +0000 UTC m=+965.855046999" observedRunningTime="2026-02-18 12:05:35.79003396 +0000 UTC m=+970.192135296" watchObservedRunningTime="2026-02-18 12:05:35.795207584 +0000 UTC m=+970.197308900" Feb 18 12:05:36 crc kubenswrapper[4717]: I0218 12:05:36.739392 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"468aa28e-8245-4024-815a-24d469dc17bf","Type":"ContainerStarted","Data":"ab59f199ed48325d66abc30818dd4af42a257558bd4238a3e87ab39177370fd9"} Feb 18 12:05:37 crc kubenswrapper[4717]: I0218 12:05:37.055075 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:37 crc kubenswrapper[4717]: I0218 12:05:37.105883 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:37 crc kubenswrapper[4717]: I0218 12:05:37.746473 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:37 crc kubenswrapper[4717]: I0218 12:05:37.762798 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:37 crc kubenswrapper[4717]: I0218 12:05:37.762876 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:37 crc kubenswrapper[4717]: I0218 12:05:37.966274 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 
12:05:38.074048 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.237048 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c29fd"] Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.296666 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-2s2l2"] Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.298802 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.312773 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.327337 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-2s2l2"] Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.339737 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-htmxg"] Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.340953 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.345461 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.367905 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-htmxg"] Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.436819 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b720848b-1453-4de9-982e-de66099bb8f7-ovs-rundir\") pod \"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.436868 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b720848b-1453-4de9-982e-de66099bb8f7-config\") pod \"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.436909 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5jnl\" (UniqueName: \"kubernetes.io/projected/7f8549ae-e39d-4ba7-a0fc-86ad73587185-kube-api-access-m5jnl\") pod \"dnsmasq-dns-7f896c8c65-2s2l2\" (UID: \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\") " pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.436958 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-config\") pod \"dnsmasq-dns-7f896c8c65-2s2l2\" (UID: \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\") " 
pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.437008 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b720848b-1453-4de9-982e-de66099bb8f7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.437031 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-2s2l2\" (UID: \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\") " pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.437067 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b720848b-1453-4de9-982e-de66099bb8f7-ovn-rundir\") pod \"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.437090 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkcfk\" (UniqueName: \"kubernetes.io/projected/b720848b-1453-4de9-982e-de66099bb8f7-kube-api-access-tkcfk\") pod \"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.437137 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b720848b-1453-4de9-982e-de66099bb8f7-combined-ca-bundle\") pod 
\"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.437169 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-2s2l2\" (UID: \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\") " pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.540681 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-2s2l2\" (UID: \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\") " pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.540762 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b720848b-1453-4de9-982e-de66099bb8f7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.540833 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b720848b-1453-4de9-982e-de66099bb8f7-ovn-rundir\") pod \"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.540892 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkcfk\" (UniqueName: \"kubernetes.io/projected/b720848b-1453-4de9-982e-de66099bb8f7-kube-api-access-tkcfk\") pod 
\"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.540985 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b720848b-1453-4de9-982e-de66099bb8f7-combined-ca-bundle\") pod \"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.541047 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-2s2l2\" (UID: \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\") " pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.541100 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b720848b-1453-4de9-982e-de66099bb8f7-ovs-rundir\") pod \"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.541143 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b720848b-1453-4de9-982e-de66099bb8f7-config\") pod \"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.541228 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5jnl\" (UniqueName: \"kubernetes.io/projected/7f8549ae-e39d-4ba7-a0fc-86ad73587185-kube-api-access-m5jnl\") pod \"dnsmasq-dns-7f896c8c65-2s2l2\" (UID: 
\"7f8549ae-e39d-4ba7-a0fc-86ad73587185\") " pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.541311 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-config\") pod \"dnsmasq-dns-7f896c8c65-2s2l2\" (UID: \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\") " pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.541765 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b720848b-1453-4de9-982e-de66099bb8f7-ovn-rundir\") pod \"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.541829 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-2s2l2\" (UID: \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\") " pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.542195 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b720848b-1453-4de9-982e-de66099bb8f7-ovs-rundir\") pod \"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.542630 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-config\") pod \"dnsmasq-dns-7f896c8c65-2s2l2\" (UID: \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\") " pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:38 crc kubenswrapper[4717]: 
I0218 12:05:38.542933 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-2s2l2\" (UID: \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\") " pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.543238 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b720848b-1453-4de9-982e-de66099bb8f7-config\") pod \"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.552477 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b720848b-1453-4de9-982e-de66099bb8f7-combined-ca-bundle\") pod \"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.565370 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkcfk\" (UniqueName: \"kubernetes.io/projected/b720848b-1453-4de9-982e-de66099bb8f7-kube-api-access-tkcfk\") pod \"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.568391 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qdxkj"] Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.568665 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" podUID="34c74aec-6f0a-4388-b1f8-46574087c035" containerName="dnsmasq-dns" containerID="cri-o://8f5872dd2f0e2109801e0061a67ff15112c762a403cda89b478c8ecbeb983b9c" gracePeriod=10 Feb 
18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.579014 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b720848b-1453-4de9-982e-de66099bb8f7-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-htmxg\" (UID: \"b720848b-1453-4de9-982e-de66099bb8f7\") " pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.598602 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5jnl\" (UniqueName: \"kubernetes.io/projected/7f8549ae-e39d-4ba7-a0fc-86ad73587185-kube-api-access-m5jnl\") pod \"dnsmasq-dns-7f896c8c65-2s2l2\" (UID: \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\") " pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.610091 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d96db"] Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.632863 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.643671 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.645189 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.669516 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-htmxg" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.672968 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d96db"] Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.750777 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-d96db\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.750887 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-d96db\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.751781 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzg8s\" (UniqueName: \"kubernetes.io/projected/09838886-b5e2-4ae4-9673-2dc5f4d02da3-kube-api-access-xzg8s\") pod \"dnsmasq-dns-86db49b7ff-d96db\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.751814 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-d96db\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.751876 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-config\") pod \"dnsmasq-dns-86db49b7ff-d96db\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.845544 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.854644 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzg8s\" (UniqueName: \"kubernetes.io/projected/09838886-b5e2-4ae4-9673-2dc5f4d02da3-kube-api-access-xzg8s\") pod \"dnsmasq-dns-86db49b7ff-d96db\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.854717 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-d96db\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.854793 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-config\") pod \"dnsmasq-dns-86db49b7ff-d96db\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.854951 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-d96db\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.854976 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-d96db\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.865772 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-d96db\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.867378 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-d96db\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.868876 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-d96db\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.910479 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzg8s\" (UniqueName: \"kubernetes.io/projected/09838886-b5e2-4ae4-9673-2dc5f4d02da3-kube-api-access-xzg8s\") pod \"dnsmasq-dns-86db49b7ff-d96db\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: 
I0218 12:05:38.949439 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-config\") pod \"dnsmasq-dns-86db49b7ff-d96db\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.956159 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/967d5877-0ade-411e-8b7c-16730b8dc3a1-config\") pod \"967d5877-0ade-411e-8b7c-16730b8dc3a1\" (UID: \"967d5877-0ade-411e-8b7c-16730b8dc3a1\") " Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.956493 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb2mx\" (UniqueName: \"kubernetes.io/projected/967d5877-0ade-411e-8b7c-16730b8dc3a1-kube-api-access-fb2mx\") pod \"967d5877-0ade-411e-8b7c-16730b8dc3a1\" (UID: \"967d5877-0ade-411e-8b7c-16730b8dc3a1\") " Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.956531 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/967d5877-0ade-411e-8b7c-16730b8dc3a1-dns-svc\") pod \"967d5877-0ade-411e-8b7c-16730b8dc3a1\" (UID: \"967d5877-0ade-411e-8b7c-16730b8dc3a1\") " Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.957410 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/967d5877-0ade-411e-8b7c-16730b8dc3a1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "967d5877-0ade-411e-8b7c-16730b8dc3a1" (UID: "967d5877-0ade-411e-8b7c-16730b8dc3a1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.957793 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/967d5877-0ade-411e-8b7c-16730b8dc3a1-config" (OuterVolumeSpecName: "config") pod "967d5877-0ade-411e-8b7c-16730b8dc3a1" (UID: "967d5877-0ade-411e-8b7c-16730b8dc3a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.963940 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/967d5877-0ade-411e-8b7c-16730b8dc3a1-kube-api-access-fb2mx" (OuterVolumeSpecName: "kube-api-access-fb2mx") pod "967d5877-0ade-411e-8b7c-16730b8dc3a1" (UID: "967d5877-0ade-411e-8b7c-16730b8dc3a1"). InnerVolumeSpecName "kube-api-access-fb2mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.980696 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:38 crc kubenswrapper[4717]: I0218 12:05:38.992521 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.069801 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/967d5877-0ade-411e-8b7c-16730b8dc3a1-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.069858 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb2mx\" (UniqueName: \"kubernetes.io/projected/967d5877-0ade-411e-8b7c-16730b8dc3a1-kube-api-access-fb2mx\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.069872 4717 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/967d5877-0ade-411e-8b7c-16730b8dc3a1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.193005 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-2s2l2"] Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.310953 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.312465 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.320045 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.324021 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-5fhpl" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.324058 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.324221 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.324319 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.377692 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31402202-302e-46a9-b565-d7b8143153d5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.378402 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/31402202-302e-46a9-b565-d7b8143153d5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.379033 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31402202-302e-46a9-b565-d7b8143153d5-config\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " 
pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.379090 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blgvv\" (UniqueName: \"kubernetes.io/projected/31402202-302e-46a9-b565-d7b8143153d5-kube-api-access-blgvv\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.379108 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31402202-302e-46a9-b565-d7b8143153d5-scripts\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.379133 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/31402202-302e-46a9-b565-d7b8143153d5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.380555 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/31402202-302e-46a9-b565-d7b8143153d5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.481520 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-htmxg"] Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.482524 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31402202-302e-46a9-b565-d7b8143153d5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.482590 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/31402202-302e-46a9-b565-d7b8143153d5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.482681 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31402202-302e-46a9-b565-d7b8143153d5-config\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.482716 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blgvv\" (UniqueName: \"kubernetes.io/projected/31402202-302e-46a9-b565-d7b8143153d5-kube-api-access-blgvv\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.482745 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31402202-302e-46a9-b565-d7b8143153d5-scripts\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.482764 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/31402202-302e-46a9-b565-d7b8143153d5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.482791 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/31402202-302e-46a9-b565-d7b8143153d5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.484882 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/31402202-302e-46a9-b565-d7b8143153d5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.485388 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31402202-302e-46a9-b565-d7b8143153d5-config\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.485669 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31402202-302e-46a9-b565-d7b8143153d5-scripts\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.490708 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/31402202-302e-46a9-b565-d7b8143153d5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.491107 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/31402202-302e-46a9-b565-d7b8143153d5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: 
I0218 12:05:39.497727 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31402202-302e-46a9-b565-d7b8143153d5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.517102 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blgvv\" (UniqueName: \"kubernetes.io/projected/31402202-302e-46a9-b565-d7b8143153d5-kube-api-access-blgvv\") pod \"ovn-northd-0\" (UID: \"31402202-302e-46a9-b565-d7b8143153d5\") " pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.655417 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.708232 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d96db"] Feb 18 12:05:39 crc kubenswrapper[4717]: W0218 12:05:39.716647 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09838886_b5e2_4ae4_9673_2dc5f4d02da3.slice/crio-19af81836c8ea4506643cf50ab6e90b9d39599246974bdbef8c7a5ff630cc8a9 WatchSource:0}: Error finding container 19af81836c8ea4506643cf50ab6e90b9d39599246974bdbef8c7a5ff630cc8a9: Status 404 returned error can't find the container with id 19af81836c8ea4506643cf50ab6e90b9d39599246974bdbef8c7a5ff630cc8a9 Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.788887 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-d96db" event={"ID":"09838886-b5e2-4ae4-9673-2dc5f4d02da3","Type":"ContainerStarted","Data":"19af81836c8ea4506643cf50ab6e90b9d39599246974bdbef8c7a5ff630cc8a9"} Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.791581 4717 generic.go:334] "Generic (PLEG): container finished" 
podID="34c74aec-6f0a-4388-b1f8-46574087c035" containerID="8f5872dd2f0e2109801e0061a67ff15112c762a403cda89b478c8ecbeb983b9c" exitCode=0 Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.791718 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" event={"ID":"34c74aec-6f0a-4388-b1f8-46574087c035","Type":"ContainerDied","Data":"8f5872dd2f0e2109801e0061a67ff15112c762a403cda89b478c8ecbeb983b9c"} Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.791766 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" event={"ID":"34c74aec-6f0a-4388-b1f8-46574087c035","Type":"ContainerDied","Data":"62c447fd45b0994cdf0c232fdb4468927a8a24e0eadd4d33a52360dba29d1802"} Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.791785 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62c447fd45b0994cdf0c232fdb4468927a8a24e0eadd4d33a52360dba29d1802" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.794148 4717 generic.go:334] "Generic (PLEG): container finished" podID="7f8549ae-e39d-4ba7-a0fc-86ad73587185" containerID="ba9f4733a03c39a849876209bd5a3835720d8a544c78df42102cd36dfba74187" exitCode=0 Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.794234 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" event={"ID":"7f8549ae-e39d-4ba7-a0fc-86ad73587185","Type":"ContainerDied","Data":"ba9f4733a03c39a849876209bd5a3835720d8a544c78df42102cd36dfba74187"} Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.794283 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" event={"ID":"7f8549ae-e39d-4ba7-a0fc-86ad73587185","Type":"ContainerStarted","Data":"4bf5f09c2187457f25d7c8f0e17ca1b35d9ccf6859291a3f33d31c7135360d4c"} Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.799027 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"636b0761-84e8-4d2f-88f4-4845e2a05f80","Type":"ContainerStarted","Data":"8fc0fd4dff3445e269bbeb39633c839f589393a7bc4e27075b17d0d8abab0d28"} Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.801283 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.801277 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c29fd" event={"ID":"967d5877-0ade-411e-8b7c-16730b8dc3a1","Type":"ContainerDied","Data":"22130b8499c251b79c3541000c41abc11ffb7d494cda00096068f44bcca27492"} Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.839307 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-htmxg" event={"ID":"b720848b-1453-4de9-982e-de66099bb8f7","Type":"ContainerStarted","Data":"d9d1ba85cc5e8cbc1a1fc7ce904e97d4fbc3cc2ca59b45a56f7e891b68314325"} Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.882649 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.923108 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c29fd"] Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.929926 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c29fd"] Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.996146 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34c74aec-6f0a-4388-b1f8-46574087c035-dns-svc\") pod \"34c74aec-6f0a-4388-b1f8-46574087c035\" (UID: \"34c74aec-6f0a-4388-b1f8-46574087c035\") " Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.996248 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34c74aec-6f0a-4388-b1f8-46574087c035-config\") pod \"34c74aec-6f0a-4388-b1f8-46574087c035\" (UID: \"34c74aec-6f0a-4388-b1f8-46574087c035\") " Feb 18 12:05:39 crc kubenswrapper[4717]: I0218 12:05:39.996381 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhdln\" (UniqueName: \"kubernetes.io/projected/34c74aec-6f0a-4388-b1f8-46574087c035-kube-api-access-lhdln\") pod \"34c74aec-6f0a-4388-b1f8-46574087c035\" (UID: \"34c74aec-6f0a-4388-b1f8-46574087c035\") " Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.008798 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c74aec-6f0a-4388-b1f8-46574087c035-kube-api-access-lhdln" (OuterVolumeSpecName: "kube-api-access-lhdln") pod "34c74aec-6f0a-4388-b1f8-46574087c035" (UID: "34c74aec-6f0a-4388-b1f8-46574087c035"). InnerVolumeSpecName "kube-api-access-lhdln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.078990 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34c74aec-6f0a-4388-b1f8-46574087c035-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "34c74aec-6f0a-4388-b1f8-46574087c035" (UID: "34c74aec-6f0a-4388-b1f8-46574087c035"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.102418 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhdln\" (UniqueName: \"kubernetes.io/projected/34c74aec-6f0a-4388-b1f8-46574087c035-kube-api-access-lhdln\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.102455 4717 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34c74aec-6f0a-4388-b1f8-46574087c035-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.137959 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34c74aec-6f0a-4388-b1f8-46574087c035-config" (OuterVolumeSpecName: "config") pod "34c74aec-6f0a-4388-b1f8-46574087c035" (UID: "34c74aec-6f0a-4388-b1f8-46574087c035"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.187016 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.204889 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34c74aec-6f0a-4388-b1f8-46574087c035-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:40 crc kubenswrapper[4717]: W0218 12:05:40.205744 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31402202_302e_46a9_b565_d7b8143153d5.slice/crio-77a34ffff0c42003668761612d9e6a3f8c66f54821c06eada6fd9b6396b6f14a WatchSource:0}: Error finding container 77a34ffff0c42003668761612d9e6a3f8c66f54821c06eada6fd9b6396b6f14a: Status 404 returned error can't find the container with id 77a34ffff0c42003668761612d9e6a3f8c66f54821c06eada6fd9b6396b6f14a Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.853285 4717 generic.go:334] "Generic (PLEG): container finished" podID="09838886-b5e2-4ae4-9673-2dc5f4d02da3" containerID="1b1e487348ae75c0b21ea562f9cdf75a8e261d4b2a2b1173852721dfd8344864" exitCode=0 Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.853360 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-d96db" event={"ID":"09838886-b5e2-4ae4-9673-2dc5f4d02da3","Type":"ContainerDied","Data":"1b1e487348ae75c0b21ea562f9cdf75a8e261d4b2a2b1173852721dfd8344864"} Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.859759 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" event={"ID":"7f8549ae-e39d-4ba7-a0fc-86ad73587185","Type":"ContainerStarted","Data":"45944e00d781e6df9b6ab2db976ae08e29a408ed13b79b38a50c1556bcc66783"} Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.859846 4717 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.867755 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-htmxg" event={"ID":"b720848b-1453-4de9-982e-de66099bb8f7","Type":"ContainerStarted","Data":"25a7219eec43c529b23bb4c4442e9ce34239badd3dbf2a7662674192e8c1843c"} Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.871118 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"31402202-302e-46a9-b565-d7b8143153d5","Type":"ContainerStarted","Data":"77a34ffff0c42003668761612d9e6a3f8c66f54821c06eada6fd9b6396b6f14a"} Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.874818 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qdxkj" Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.875672 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9e12df56-53ef-42bc-9f15-2c7a89b391d1","Type":"ContainerStarted","Data":"447f575366018d439f67930e5755eab9ace03689ba4bc1fd31f074dad5142dbd"} Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.913442 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-htmxg" podStartSLOduration=2.91341061 podStartE2EDuration="2.91341061s" podCreationTimestamp="2026-02-18 12:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:05:40.900944044 +0000 UTC m=+975.303045390" watchObservedRunningTime="2026-02-18 12:05:40.91341061 +0000 UTC m=+975.315511936" Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.937797 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" podStartSLOduration=2.937768548 podStartE2EDuration="2.937768548s" 
podCreationTimestamp="2026-02-18 12:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:05:40.935593577 +0000 UTC m=+975.337694893" watchObservedRunningTime="2026-02-18 12:05:40.937768548 +0000 UTC m=+975.339869864" Feb 18 12:05:40 crc kubenswrapper[4717]: I0218 12:05:40.994025 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qdxkj"] Feb 18 12:05:41 crc kubenswrapper[4717]: I0218 12:05:41.002153 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:41 crc kubenswrapper[4717]: I0218 12:05:41.003452 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:41 crc kubenswrapper[4717]: I0218 12:05:41.023031 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qdxkj"] Feb 18 12:05:41 crc kubenswrapper[4717]: I0218 12:05:41.052510 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34c74aec-6f0a-4388-b1f8-46574087c035" path="/var/lib/kubelet/pods/34c74aec-6f0a-4388-b1f8-46574087c035/volumes" Feb 18 12:05:41 crc kubenswrapper[4717]: I0218 12:05:41.053660 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="967d5877-0ade-411e-8b7c-16730b8dc3a1" path="/var/lib/kubelet/pods/967d5877-0ade-411e-8b7c-16730b8dc3a1/volumes" Feb 18 12:05:41 crc kubenswrapper[4717]: I0218 12:05:41.054148 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 18 12:05:41 crc kubenswrapper[4717]: I0218 12:05:41.140641 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:41 crc kubenswrapper[4717]: I0218 12:05:41.886193 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-86db49b7ff-d96db" event={"ID":"09838886-b5e2-4ae4-9673-2dc5f4d02da3","Type":"ContainerStarted","Data":"9d5569f2af736eb8dade2391851d90ec1afb46a7e6f1b26a1e6aed404d34e0da"} Feb 18 12:05:41 crc kubenswrapper[4717]: I0218 12:05:41.917121 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-d96db" podStartSLOduration=3.917102711 podStartE2EDuration="3.917102711s" podCreationTimestamp="2026-02-18 12:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:05:41.912510163 +0000 UTC m=+976.314611479" watchObservedRunningTime="2026-02-18 12:05:41.917102711 +0000 UTC m=+976.319204027" Feb 18 12:05:42 crc kubenswrapper[4717]: I0218 12:05:42.017565 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 18 12:05:42 crc kubenswrapper[4717]: I0218 12:05:42.897379 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"31402202-302e-46a9-b565-d7b8143153d5","Type":"ContainerStarted","Data":"2d13bb41ae4e2a818e6e8c18e78f5608057cb80a24b66b8c5d31ce8ecbb2641f"} Feb 18 12:05:42 crc kubenswrapper[4717]: I0218 12:05:42.897942 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"31402202-302e-46a9-b565-d7b8143153d5","Type":"ContainerStarted","Data":"88f30d3124fa7800ac4f899cd141f64667f0ece1ac7c8901866397aea3c9007f"} Feb 18 12:05:42 crc kubenswrapper[4717]: I0218 12:05:42.897997 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:42 crc kubenswrapper[4717]: I0218 12:05:42.948771 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.5687604889999998 podStartE2EDuration="3.948746598s" podCreationTimestamp="2026-02-18 12:05:39 
+0000 UTC" firstStartedPulling="2026-02-18 12:05:40.208996505 +0000 UTC m=+974.611097821" lastFinishedPulling="2026-02-18 12:05:41.588982614 +0000 UTC m=+975.991083930" observedRunningTime="2026-02-18 12:05:42.934616715 +0000 UTC m=+977.336718031" watchObservedRunningTime="2026-02-18 12:05:42.948746598 +0000 UTC m=+977.350847914" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.614275 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-2s2l2"] Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.614632 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" podUID="7f8549ae-e39d-4ba7-a0fc-86ad73587185" containerName="dnsmasq-dns" containerID="cri-o://45944e00d781e6df9b6ab2db976ae08e29a408ed13b79b38a50c1556bcc66783" gracePeriod=10 Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.637801 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.663956 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-qb9hn"] Feb 18 12:05:43 crc kubenswrapper[4717]: E0218 12:05:43.664527 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c74aec-6f0a-4388-b1f8-46574087c035" containerName="init" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.664551 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c74aec-6f0a-4388-b1f8-46574087c035" containerName="init" Feb 18 12:05:43 crc kubenswrapper[4717]: E0218 12:05:43.664579 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c74aec-6f0a-4388-b1f8-46574087c035" containerName="dnsmasq-dns" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.664588 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c74aec-6f0a-4388-b1f8-46574087c035" containerName="dnsmasq-dns" Feb 18 12:05:43 crc kubenswrapper[4717]: 
I0218 12:05:43.664807 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c74aec-6f0a-4388-b1f8-46574087c035" containerName="dnsmasq-dns" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.666147 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.690071 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-qb9hn"] Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.792628 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-qb9hn\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.792755 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-config\") pod \"dnsmasq-dns-698758b865-qb9hn\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.792801 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-qb9hn\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.792839 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-dns-svc\") pod 
\"dnsmasq-dns-698758b865-qb9hn\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.792873 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mddkx\" (UniqueName: \"kubernetes.io/projected/ea526311-abc8-4443-8345-75f047f7909a-kube-api-access-mddkx\") pod \"dnsmasq-dns-698758b865-qb9hn\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.894839 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mddkx\" (UniqueName: \"kubernetes.io/projected/ea526311-abc8-4443-8345-75f047f7909a-kube-api-access-mddkx\") pod \"dnsmasq-dns-698758b865-qb9hn\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.894925 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-qb9hn\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.894997 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-config\") pod \"dnsmasq-dns-698758b865-qb9hn\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.895040 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-ovsdbserver-nb\") pod 
\"dnsmasq-dns-698758b865-qb9hn\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.895076 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-dns-svc\") pod \"dnsmasq-dns-698758b865-qb9hn\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.896122 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-dns-svc\") pod \"dnsmasq-dns-698758b865-qb9hn\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.896285 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-qb9hn\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.898367 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-qb9hn\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.898556 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-config\") pod \"dnsmasq-dns-698758b865-qb9hn\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " pod="openstack/dnsmasq-dns-698758b865-qb9hn" 
Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.916067 4717 generic.go:334] "Generic (PLEG): container finished" podID="7f8549ae-e39d-4ba7-a0fc-86ad73587185" containerID="45944e00d781e6df9b6ab2db976ae08e29a408ed13b79b38a50c1556bcc66783" exitCode=0 Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.916206 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" event={"ID":"7f8549ae-e39d-4ba7-a0fc-86ad73587185","Type":"ContainerDied","Data":"45944e00d781e6df9b6ab2db976ae08e29a408ed13b79b38a50c1556bcc66783"} Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.925020 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mddkx\" (UniqueName: \"kubernetes.io/projected/ea526311-abc8-4443-8345-75f047f7909a-kube-api-access-mddkx\") pod \"dnsmasq-dns-698758b865-qb9hn\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.935192 4717 generic.go:334] "Generic (PLEG): container finished" podID="9e12df56-53ef-42bc-9f15-2c7a89b391d1" containerID="447f575366018d439f67930e5755eab9ace03689ba4bc1fd31f074dad5142dbd" exitCode=0 Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.935507 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9e12df56-53ef-42bc-9f15-2c7a89b391d1","Type":"ContainerDied","Data":"447f575366018d439f67930e5755eab9ace03689ba4bc1fd31f074dad5142dbd"} Feb 18 12:05:43 crc kubenswrapper[4717]: I0218 12:05:43.937704 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.021677 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.133539 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.203215 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5jnl\" (UniqueName: \"kubernetes.io/projected/7f8549ae-e39d-4ba7-a0fc-86ad73587185-kube-api-access-m5jnl\") pod \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\" (UID: \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\") " Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.203447 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-config\") pod \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\" (UID: \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\") " Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.203544 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-ovsdbserver-sb\") pod \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\" (UID: \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\") " Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.203594 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-dns-svc\") pod \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\" (UID: \"7f8549ae-e39d-4ba7-a0fc-86ad73587185\") " Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.210808 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f8549ae-e39d-4ba7-a0fc-86ad73587185-kube-api-access-m5jnl" (OuterVolumeSpecName: "kube-api-access-m5jnl") pod "7f8549ae-e39d-4ba7-a0fc-86ad73587185" (UID: "7f8549ae-e39d-4ba7-a0fc-86ad73587185"). InnerVolumeSpecName "kube-api-access-m5jnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.311562 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5jnl\" (UniqueName: \"kubernetes.io/projected/7f8549ae-e39d-4ba7-a0fc-86ad73587185-kube-api-access-m5jnl\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.394684 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f8549ae-e39d-4ba7-a0fc-86ad73587185" (UID: "7f8549ae-e39d-4ba7-a0fc-86ad73587185"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.395352 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f8549ae-e39d-4ba7-a0fc-86ad73587185" (UID: "7f8549ae-e39d-4ba7-a0fc-86ad73587185"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.407062 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-config" (OuterVolumeSpecName: "config") pod "7f8549ae-e39d-4ba7-a0fc-86ad73587185" (UID: "7f8549ae-e39d-4ba7-a0fc-86ad73587185"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.412902 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.412931 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.412944 4717 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8549ae-e39d-4ba7-a0fc-86ad73587185-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.723959 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-qb9hn"] Feb 18 12:05:44 crc kubenswrapper[4717]: W0218 12:05:44.736006 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea526311_abc8_4443_8345_75f047f7909a.slice/crio-46e04cf3c3596bcd1e15afbaa01575b3a261b2e0f90cb1e39119321a5306afb2 WatchSource:0}: Error finding container 46e04cf3c3596bcd1e15afbaa01575b3a261b2e0f90cb1e39119321a5306afb2: Status 404 returned error can't find the container with id 46e04cf3c3596bcd1e15afbaa01575b3a261b2e0f90cb1e39119321a5306afb2 Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.792648 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 18 12:05:44 crc kubenswrapper[4717]: E0218 12:05:44.793066 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8549ae-e39d-4ba7-a0fc-86ad73587185" containerName="dnsmasq-dns" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.793088 4717 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7f8549ae-e39d-4ba7-a0fc-86ad73587185" containerName="dnsmasq-dns" Feb 18 12:05:44 crc kubenswrapper[4717]: E0218 12:05:44.793119 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8549ae-e39d-4ba7-a0fc-86ad73587185" containerName="init" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.793128 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8549ae-e39d-4ba7-a0fc-86ad73587185" containerName="init" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.793377 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8549ae-e39d-4ba7-a0fc-86ad73587185" containerName="dnsmasq-dns" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.798516 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.801841 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.801891 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.802062 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.802179 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-rl8px" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.822428 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.922906 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 
12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.922991 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9102b2e6-400b-4ba1-97a2-eb5be85f778a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.923035 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blqfh\" (UniqueName: \"kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-kube-api-access-blqfh\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.923091 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9102b2e6-400b-4ba1-97a2-eb5be85f778a-cache\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.923135 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.923159 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9102b2e6-400b-4ba1-97a2-eb5be85f778a-lock\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.943676 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-698758b865-qb9hn" event={"ID":"ea526311-abc8-4443-8345-75f047f7909a","Type":"ContainerStarted","Data":"46e04cf3c3596bcd1e15afbaa01575b3a261b2e0f90cb1e39119321a5306afb2"} Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.947087 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9e12df56-53ef-42bc-9f15-2c7a89b391d1","Type":"ContainerStarted","Data":"dc86aa30eecd87bf8af210baa16e5b1c68d8c517569d50cd72e44bb36048e7d3"} Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.951738 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" event={"ID":"7f8549ae-e39d-4ba7-a0fc-86ad73587185","Type":"ContainerDied","Data":"4bf5f09c2187457f25d7c8f0e17ca1b35d9ccf6859291a3f33d31c7135360d4c"} Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.951801 4717 scope.go:117] "RemoveContainer" containerID="45944e00d781e6df9b6ab2db976ae08e29a408ed13b79b38a50c1556bcc66783" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.951811 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-2s2l2" Feb 18 12:05:44 crc kubenswrapper[4717]: I0218 12:05:44.977953 4717 scope.go:117] "RemoveContainer" containerID="ba9f4733a03c39a849876209bd5a3835720d8a544c78df42102cd36dfba74187" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.013043 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371989.841785 podStartE2EDuration="47.012989891s" podCreationTimestamp="2026-02-18 12:04:58 +0000 UTC" firstStartedPulling="2026-02-18 12:05:05.223192503 +0000 UTC m=+939.625293819" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:05:44.972080103 +0000 UTC m=+979.374181419" watchObservedRunningTime="2026-02-18 12:05:45.012989891 +0000 UTC m=+979.415091207" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.025993 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9102b2e6-400b-4ba1-97a2-eb5be85f778a-cache\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.026052 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.026079 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9102b2e6-400b-4ba1-97a2-eb5be85f778a-lock\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.026185 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.026250 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9102b2e6-400b-4ba1-97a2-eb5be85f778a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.026319 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blqfh\" (UniqueName: \"kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-kube-api-access-blqfh\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.028047 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9102b2e6-400b-4ba1-97a2-eb5be85f778a-cache\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.028284 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Feb 18 12:05:45 crc kubenswrapper[4717]: E0218 12:05:45.029298 4717 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 12:05:45 crc kubenswrapper[4717]: E0218 12:05:45.029343 4717 projected.go:194] Error preparing data for 
projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 12:05:45 crc kubenswrapper[4717]: E0218 12:05:45.029400 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift podName:9102b2e6-400b-4ba1-97a2-eb5be85f778a nodeName:}" failed. No retries permitted until 2026-02-18 12:05:45.529378567 +0000 UTC m=+979.931479883 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift") pod "swift-storage-0" (UID: "9102b2e6-400b-4ba1-97a2-eb5be85f778a") : configmap "swift-ring-files" not found Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.030188 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9102b2e6-400b-4ba1-97a2-eb5be85f778a-lock\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.033736 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-2s2l2"] Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.038772 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9102b2e6-400b-4ba1-97a2-eb5be85f778a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.052102 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blqfh\" (UniqueName: \"kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-kube-api-access-blqfh\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:45 crc kubenswrapper[4717]: 
I0218 12:05:45.054607 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-2s2l2"] Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.060232 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.297791 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-66d7l"] Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.299176 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.301403 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.301541 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.303010 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.316087 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-66d7l"] Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.332056 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8fqd\" (UniqueName: \"kubernetes.io/projected/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-kube-api-access-l8fqd\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.332182 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-scripts\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.332221 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-ring-data-devices\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.332339 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-etc-swift\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.332371 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-combined-ca-bundle\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.332396 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-dispersionconf\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.332421 4717 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-swiftconf\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.434334 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8fqd\" (UniqueName: \"kubernetes.io/projected/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-kube-api-access-l8fqd\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.434419 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-scripts\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.434455 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-ring-data-devices\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.434524 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-etc-swift\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.434561 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-combined-ca-bundle\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.434589 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-dispersionconf\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.434614 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-swiftconf\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.435453 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-etc-swift\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.436161 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-ring-data-devices\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.436517 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-scripts\") pod 
\"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.440415 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-swiftconf\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.441941 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-dispersionconf\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.444342 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-combined-ca-bundle\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.457904 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8fqd\" (UniqueName: \"kubernetes.io/projected/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-kube-api-access-l8fqd\") pod \"swift-ring-rebalance-66d7l\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.536449 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 
12:05:45 crc kubenswrapper[4717]: E0218 12:05:45.536704 4717 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 12:05:45 crc kubenswrapper[4717]: E0218 12:05:45.536744 4717 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 12:05:45 crc kubenswrapper[4717]: E0218 12:05:45.536824 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift podName:9102b2e6-400b-4ba1-97a2-eb5be85f778a nodeName:}" failed. No retries permitted until 2026-02-18 12:05:46.536798772 +0000 UTC m=+980.938900078 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift") pod "swift-storage-0" (UID: "9102b2e6-400b-4ba1-97a2-eb5be85f778a") : configmap "swift-ring-files" not found Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.621553 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.965549 4717 generic.go:334] "Generic (PLEG): container finished" podID="ea526311-abc8-4443-8345-75f047f7909a" containerID="d09269bc43bd80baa3a585d8a05be340577cbbb999e61a37c77e3afd24f1a700" exitCode=0 Feb 18 12:05:45 crc kubenswrapper[4717]: I0218 12:05:45.965636 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-qb9hn" event={"ID":"ea526311-abc8-4443-8345-75f047f7909a","Type":"ContainerDied","Data":"d09269bc43bd80baa3a585d8a05be340577cbbb999e61a37c77e3afd24f1a700"} Feb 18 12:05:46 crc kubenswrapper[4717]: I0218 12:05:46.108342 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-66d7l"] Feb 18 12:05:46 crc kubenswrapper[4717]: W0218 12:05:46.142179 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac3ad8c1_04a7_46f5_9c76_98c92e3c2158.slice/crio-4546b6881b07ff3dfa1ae986f890e1a4c63e1870c7810dd5cd7c67546f809676 WatchSource:0}: Error finding container 4546b6881b07ff3dfa1ae986f890e1a4c63e1870c7810dd5cd7c67546f809676: Status 404 returned error can't find the container with id 4546b6881b07ff3dfa1ae986f890e1a4c63e1870c7810dd5cd7c67546f809676 Feb 18 12:05:46 crc kubenswrapper[4717]: I0218 12:05:46.555024 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:46 crc kubenswrapper[4717]: E0218 12:05:46.555451 4717 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 12:05:46 crc kubenswrapper[4717]: E0218 12:05:46.555475 4717 projected.go:194] Error preparing data for projected volume 
etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 12:05:46 crc kubenswrapper[4717]: E0218 12:05:46.555536 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift podName:9102b2e6-400b-4ba1-97a2-eb5be85f778a nodeName:}" failed. No retries permitted until 2026-02-18 12:05:48.55551776 +0000 UTC m=+982.957619076 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift") pod "swift-storage-0" (UID: "9102b2e6-400b-4ba1-97a2-eb5be85f778a") : configmap "swift-ring-files" not found Feb 18 12:05:46 crc kubenswrapper[4717]: I0218 12:05:46.976463 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-qb9hn" event={"ID":"ea526311-abc8-4443-8345-75f047f7909a","Type":"ContainerStarted","Data":"4b0ddd3de70c36b4da2c54c70b80014dc7288066de9d4a3f8d23f2603cacfc7c"} Feb 18 12:05:46 crc kubenswrapper[4717]: I0218 12:05:46.977022 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:46 crc kubenswrapper[4717]: I0218 12:05:46.978806 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-66d7l" event={"ID":"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158","Type":"ContainerStarted","Data":"4546b6881b07ff3dfa1ae986f890e1a4c63e1870c7810dd5cd7c67546f809676"} Feb 18 12:05:47 crc kubenswrapper[4717]: I0218 12:05:47.010042 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-qb9hn" podStartSLOduration=4.010009553 podStartE2EDuration="4.010009553s" podCreationTimestamp="2026-02-18 12:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:05:47.000114418 +0000 UTC m=+981.402215744" 
watchObservedRunningTime="2026-02-18 12:05:47.010009553 +0000 UTC m=+981.412110869" Feb 18 12:05:47 crc kubenswrapper[4717]: I0218 12:05:47.066411 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f8549ae-e39d-4ba7-a0fc-86ad73587185" path="/var/lib/kubelet/pods/7f8549ae-e39d-4ba7-a0fc-86ad73587185/volumes" Feb 18 12:05:48 crc kubenswrapper[4717]: I0218 12:05:48.599668 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:48 crc kubenswrapper[4717]: E0218 12:05:48.599961 4717 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 12:05:48 crc kubenswrapper[4717]: E0218 12:05:48.600240 4717 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 12:05:48 crc kubenswrapper[4717]: E0218 12:05:48.600336 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift podName:9102b2e6-400b-4ba1-97a2-eb5be85f778a nodeName:}" failed. No retries permitted until 2026-02-18 12:05:52.600315022 +0000 UTC m=+987.002416338 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift") pod "swift-storage-0" (UID: "9102b2e6-400b-4ba1-97a2-eb5be85f778a") : configmap "swift-ring-files" not found Feb 18 12:05:48 crc kubenswrapper[4717]: I0218 12:05:48.982622 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:49 crc kubenswrapper[4717]: I0218 12:05:49.493283 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 12:05:49 crc kubenswrapper[4717]: I0218 12:05:49.493382 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 12:05:49 crc kubenswrapper[4717]: I0218 12:05:49.589082 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 18 12:05:49 crc kubenswrapper[4717]: I0218 12:05:49.664573 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-s47d9"] Feb 18 12:05:49 crc kubenswrapper[4717]: I0218 12:05:49.666113 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s47d9" Feb 18 12:05:49 crc kubenswrapper[4717]: I0218 12:05:49.670330 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 12:05:49 crc kubenswrapper[4717]: I0218 12:05:49.674559 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-s47d9"] Feb 18 12:05:49 crc kubenswrapper[4717]: I0218 12:05:49.730810 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e0ed9f-aac4-4bb8-8659-c3f71eddce27-operator-scripts\") pod \"root-account-create-update-s47d9\" (UID: \"b8e0ed9f-aac4-4bb8-8659-c3f71eddce27\") " pod="openstack/root-account-create-update-s47d9" Feb 18 12:05:49 crc kubenswrapper[4717]: I0218 12:05:49.730947 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnfwq\" (UniqueName: \"kubernetes.io/projected/b8e0ed9f-aac4-4bb8-8659-c3f71eddce27-kube-api-access-pnfwq\") pod \"root-account-create-update-s47d9\" (UID: \"b8e0ed9f-aac4-4bb8-8659-c3f71eddce27\") " pod="openstack/root-account-create-update-s47d9" Feb 18 12:05:49 crc kubenswrapper[4717]: I0218 12:05:49.833073 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnfwq\" (UniqueName: \"kubernetes.io/projected/b8e0ed9f-aac4-4bb8-8659-c3f71eddce27-kube-api-access-pnfwq\") pod \"root-account-create-update-s47d9\" (UID: \"b8e0ed9f-aac4-4bb8-8659-c3f71eddce27\") " pod="openstack/root-account-create-update-s47d9" Feb 18 12:05:49 crc kubenswrapper[4717]: I0218 12:05:49.833205 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e0ed9f-aac4-4bb8-8659-c3f71eddce27-operator-scripts\") pod \"root-account-create-update-s47d9\" (UID: 
\"b8e0ed9f-aac4-4bb8-8659-c3f71eddce27\") " pod="openstack/root-account-create-update-s47d9" Feb 18 12:05:49 crc kubenswrapper[4717]: I0218 12:05:49.834481 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e0ed9f-aac4-4bb8-8659-c3f71eddce27-operator-scripts\") pod \"root-account-create-update-s47d9\" (UID: \"b8e0ed9f-aac4-4bb8-8659-c3f71eddce27\") " pod="openstack/root-account-create-update-s47d9" Feb 18 12:05:49 crc kubenswrapper[4717]: I0218 12:05:49.858380 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnfwq\" (UniqueName: \"kubernetes.io/projected/b8e0ed9f-aac4-4bb8-8659-c3f71eddce27-kube-api-access-pnfwq\") pod \"root-account-create-update-s47d9\" (UID: \"b8e0ed9f-aac4-4bb8-8659-c3f71eddce27\") " pod="openstack/root-account-create-update-s47d9" Feb 18 12:05:49 crc kubenswrapper[4717]: I0218 12:05:49.994953 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s47d9" Feb 18 12:05:50 crc kubenswrapper[4717]: I0218 12:05:50.133500 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 18 12:05:50 crc kubenswrapper[4717]: W0218 12:05:50.492832 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8e0ed9f_aac4_4bb8_8659_c3f71eddce27.slice/crio-2756cbfd85648c22c6a5eb498333b3ca4f908ccd30585bc506fca0e6db097dbb WatchSource:0}: Error finding container 2756cbfd85648c22c6a5eb498333b3ca4f908ccd30585bc506fca0e6db097dbb: Status 404 returned error can't find the container with id 2756cbfd85648c22c6a5eb498333b3ca4f908ccd30585bc506fca0e6db097dbb Feb 18 12:05:50 crc kubenswrapper[4717]: I0218 12:05:50.497316 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-s47d9"] Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.013025 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s47d9" event={"ID":"b8e0ed9f-aac4-4bb8-8659-c3f71eddce27","Type":"ContainerStarted","Data":"ace16603515589089c74f1a1a02687c0ba35342b5596e46465e01d7deda1693e"} Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.013553 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s47d9" event={"ID":"b8e0ed9f-aac4-4bb8-8659-c3f71eddce27","Type":"ContainerStarted","Data":"2756cbfd85648c22c6a5eb498333b3ca4f908ccd30585bc506fca0e6db097dbb"} Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.018664 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-66d7l" event={"ID":"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158","Type":"ContainerStarted","Data":"95deee24517dbce2ac708850e533ba80de7a7bc3f03609053e9db3118ddf9f0a"} Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.032302 4717 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-s47d9" podStartSLOduration=2.032280283 podStartE2EDuration="2.032280283s" podCreationTimestamp="2026-02-18 12:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:05:51.030468143 +0000 UTC m=+985.432569469" watchObservedRunningTime="2026-02-18 12:05:51.032280283 +0000 UTC m=+985.434381599" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.054442 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-66d7l" podStartSLOduration=2.275688094 podStartE2EDuration="6.054414769s" podCreationTimestamp="2026-02-18 12:05:45 +0000 UTC" firstStartedPulling="2026-02-18 12:05:46.144899858 +0000 UTC m=+980.547001174" lastFinishedPulling="2026-02-18 12:05:49.923626533 +0000 UTC m=+984.325727849" observedRunningTime="2026-02-18 12:05:51.04943669 +0000 UTC m=+985.451538006" watchObservedRunningTime="2026-02-18 12:05:51.054414769 +0000 UTC m=+985.456516085" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.225790 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-4rv27"] Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.227018 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4rv27" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.236421 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4rv27"] Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.273885 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14958cee-73ca-44f3-a12e-b505644a4429-operator-scripts\") pod \"glance-db-create-4rv27\" (UID: \"14958cee-73ca-44f3-a12e-b505644a4429\") " pod="openstack/glance-db-create-4rv27" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.274079 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcwtm\" (UniqueName: \"kubernetes.io/projected/14958cee-73ca-44f3-a12e-b505644a4429-kube-api-access-vcwtm\") pod \"glance-db-create-4rv27\" (UID: \"14958cee-73ca-44f3-a12e-b505644a4429\") " pod="openstack/glance-db-create-4rv27" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.345142 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-6995-account-create-update-gv7nt"] Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.346380 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6995-account-create-update-gv7nt" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.348938 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.364268 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6995-account-create-update-gv7nt"] Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.375761 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7550ba5-c203-4016-90a7-4273e4a8688a-operator-scripts\") pod \"glance-6995-account-create-update-gv7nt\" (UID: \"c7550ba5-c203-4016-90a7-4273e4a8688a\") " pod="openstack/glance-6995-account-create-update-gv7nt" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.375890 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlb6s\" (UniqueName: \"kubernetes.io/projected/c7550ba5-c203-4016-90a7-4273e4a8688a-kube-api-access-vlb6s\") pod \"glance-6995-account-create-update-gv7nt\" (UID: \"c7550ba5-c203-4016-90a7-4273e4a8688a\") " pod="openstack/glance-6995-account-create-update-gv7nt" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.375950 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcwtm\" (UniqueName: \"kubernetes.io/projected/14958cee-73ca-44f3-a12e-b505644a4429-kube-api-access-vcwtm\") pod \"glance-db-create-4rv27\" (UID: \"14958cee-73ca-44f3-a12e-b505644a4429\") " pod="openstack/glance-db-create-4rv27" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.376169 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14958cee-73ca-44f3-a12e-b505644a4429-operator-scripts\") pod \"glance-db-create-4rv27\" (UID: 
\"14958cee-73ca-44f3-a12e-b505644a4429\") " pod="openstack/glance-db-create-4rv27" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.377179 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14958cee-73ca-44f3-a12e-b505644a4429-operator-scripts\") pod \"glance-db-create-4rv27\" (UID: \"14958cee-73ca-44f3-a12e-b505644a4429\") " pod="openstack/glance-db-create-4rv27" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.395955 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcwtm\" (UniqueName: \"kubernetes.io/projected/14958cee-73ca-44f3-a12e-b505644a4429-kube-api-access-vcwtm\") pod \"glance-db-create-4rv27\" (UID: \"14958cee-73ca-44f3-a12e-b505644a4429\") " pod="openstack/glance-db-create-4rv27" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.479692 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7550ba5-c203-4016-90a7-4273e4a8688a-operator-scripts\") pod \"glance-6995-account-create-update-gv7nt\" (UID: \"c7550ba5-c203-4016-90a7-4273e4a8688a\") " pod="openstack/glance-6995-account-create-update-gv7nt" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.480284 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlb6s\" (UniqueName: \"kubernetes.io/projected/c7550ba5-c203-4016-90a7-4273e4a8688a-kube-api-access-vlb6s\") pod \"glance-6995-account-create-update-gv7nt\" (UID: \"c7550ba5-c203-4016-90a7-4273e4a8688a\") " pod="openstack/glance-6995-account-create-update-gv7nt" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.480540 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7550ba5-c203-4016-90a7-4273e4a8688a-operator-scripts\") pod \"glance-6995-account-create-update-gv7nt\" (UID: 
\"c7550ba5-c203-4016-90a7-4273e4a8688a\") " pod="openstack/glance-6995-account-create-update-gv7nt" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.503795 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlb6s\" (UniqueName: \"kubernetes.io/projected/c7550ba5-c203-4016-90a7-4273e4a8688a-kube-api-access-vlb6s\") pod \"glance-6995-account-create-update-gv7nt\" (UID: \"c7550ba5-c203-4016-90a7-4273e4a8688a\") " pod="openstack/glance-6995-account-create-update-gv7nt" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.588907 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4rv27" Feb 18 12:05:51 crc kubenswrapper[4717]: I0218 12:05:51.671330 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6995-account-create-update-gv7nt" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.026515 4717 generic.go:334] "Generic (PLEG): container finished" podID="b8e0ed9f-aac4-4bb8-8659-c3f71eddce27" containerID="ace16603515589089c74f1a1a02687c0ba35342b5596e46465e01d7deda1693e" exitCode=0 Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.026620 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s47d9" event={"ID":"b8e0ed9f-aac4-4bb8-8659-c3f71eddce27","Type":"ContainerDied","Data":"ace16603515589089c74f1a1a02687c0ba35342b5596e46465e01d7deda1693e"} Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.074505 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4rv27"] Feb 18 12:05:52 crc kubenswrapper[4717]: W0218 12:05:52.099780 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14958cee_73ca_44f3_a12e_b505644a4429.slice/crio-778af7dcca7f33034dc3b0466ea1d506096153892b05cb99eb57b28573f18162 WatchSource:0}: Error finding container 
778af7dcca7f33034dc3b0466ea1d506096153892b05cb99eb57b28573f18162: Status 404 returned error can't find the container with id 778af7dcca7f33034dc3b0466ea1d506096153892b05cb99eb57b28573f18162 Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.147783 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-gfpjb"] Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.149242 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gfpjb" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.155839 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gfpjb"] Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.211874 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6995-account-create-update-gv7nt"] Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.249621 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-af1f-account-create-update-d8v98"] Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.251016 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-af1f-account-create-update-d8v98" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.253039 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.264197 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-af1f-account-create-update-d8v98"] Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.302611 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10cc0ac-d3a2-4723-bbb3-772cc5327d2e-operator-scripts\") pod \"keystone-db-create-gfpjb\" (UID: \"b10cc0ac-d3a2-4723-bbb3-772cc5327d2e\") " pod="openstack/keystone-db-create-gfpjb" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.303344 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7wcf\" (UniqueName: \"kubernetes.io/projected/b10cc0ac-d3a2-4723-bbb3-772cc5327d2e-kube-api-access-c7wcf\") pod \"keystone-db-create-gfpjb\" (UID: \"b10cc0ac-d3a2-4723-bbb3-772cc5327d2e\") " pod="openstack/keystone-db-create-gfpjb" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.383742 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-wlldn"] Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.384858 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-wlldn" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.396627 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wlldn"] Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.439597 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7wcf\" (UniqueName: \"kubernetes.io/projected/b10cc0ac-d3a2-4723-bbb3-772cc5327d2e-kube-api-access-c7wcf\") pod \"keystone-db-create-gfpjb\" (UID: \"b10cc0ac-d3a2-4723-bbb3-772cc5327d2e\") " pod="openstack/keystone-db-create-gfpjb" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.439979 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe423299-da9c-4578-90bc-9c6e13b7acf6-operator-scripts\") pod \"keystone-af1f-account-create-update-d8v98\" (UID: \"fe423299-da9c-4578-90bc-9c6e13b7acf6\") " pod="openstack/keystone-af1f-account-create-update-d8v98" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.440094 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10cc0ac-d3a2-4723-bbb3-772cc5327d2e-operator-scripts\") pod \"keystone-db-create-gfpjb\" (UID: \"b10cc0ac-d3a2-4723-bbb3-772cc5327d2e\") " pod="openstack/keystone-db-create-gfpjb" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.440231 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbgkz\" (UniqueName: \"kubernetes.io/projected/fe423299-da9c-4578-90bc-9c6e13b7acf6-kube-api-access-kbgkz\") pod \"keystone-af1f-account-create-update-d8v98\" (UID: \"fe423299-da9c-4578-90bc-9c6e13b7acf6\") " pod="openstack/keystone-af1f-account-create-update-d8v98" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.442384 4717 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10cc0ac-d3a2-4723-bbb3-772cc5327d2e-operator-scripts\") pod \"keystone-db-create-gfpjb\" (UID: \"b10cc0ac-d3a2-4723-bbb3-772cc5327d2e\") " pod="openstack/keystone-db-create-gfpjb" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.460622 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7wcf\" (UniqueName: \"kubernetes.io/projected/b10cc0ac-d3a2-4723-bbb3-772cc5327d2e-kube-api-access-c7wcf\") pod \"keystone-db-create-gfpjb\" (UID: \"b10cc0ac-d3a2-4723-bbb3-772cc5327d2e\") " pod="openstack/keystone-db-create-gfpjb" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.545531 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe423299-da9c-4578-90bc-9c6e13b7acf6-operator-scripts\") pod \"keystone-af1f-account-create-update-d8v98\" (UID: \"fe423299-da9c-4578-90bc-9c6e13b7acf6\") " pod="openstack/keystone-af1f-account-create-update-d8v98" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.546410 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de5a667-32ab-4232-b42c-071d6a4347f9-operator-scripts\") pod \"placement-db-create-wlldn\" (UID: \"5de5a667-32ab-4232-b42c-071d6a4347f9\") " pod="openstack/placement-db-create-wlldn" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.546452 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq4rs\" (UniqueName: \"kubernetes.io/projected/5de5a667-32ab-4232-b42c-071d6a4347f9-kube-api-access-fq4rs\") pod \"placement-db-create-wlldn\" (UID: \"5de5a667-32ab-4232-b42c-071d6a4347f9\") " pod="openstack/placement-db-create-wlldn" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.546475 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbgkz\" (UniqueName: \"kubernetes.io/projected/fe423299-da9c-4578-90bc-9c6e13b7acf6-kube-api-access-kbgkz\") pod \"keystone-af1f-account-create-update-d8v98\" (UID: \"fe423299-da9c-4578-90bc-9c6e13b7acf6\") " pod="openstack/keystone-af1f-account-create-update-d8v98" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.546326 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe423299-da9c-4578-90bc-9c6e13b7acf6-operator-scripts\") pod \"keystone-af1f-account-create-update-d8v98\" (UID: \"fe423299-da9c-4578-90bc-9c6e13b7acf6\") " pod="openstack/keystone-af1f-account-create-update-d8v98" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.551641 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e8be-account-create-update-5wd2v"] Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.553168 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e8be-account-create-update-5wd2v" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.556651 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.567943 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e8be-account-create-update-5wd2v"] Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.568742 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbgkz\" (UniqueName: \"kubernetes.io/projected/fe423299-da9c-4578-90bc-9c6e13b7acf6-kube-api-access-kbgkz\") pod \"keystone-af1f-account-create-update-d8v98\" (UID: \"fe423299-da9c-4578-90bc-9c6e13b7acf6\") " pod="openstack/keystone-af1f-account-create-update-d8v98" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.583279 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gfpjb" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.648478 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv8lk\" (UniqueName: \"kubernetes.io/projected/e61481b3-51b8-4663-9dbd-c2abb65388df-kube-api-access-xv8lk\") pod \"placement-e8be-account-create-update-5wd2v\" (UID: \"e61481b3-51b8-4663-9dbd-c2abb65388df\") " pod="openstack/placement-e8be-account-create-update-5wd2v" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.649037 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de5a667-32ab-4232-b42c-071d6a4347f9-operator-scripts\") pod \"placement-db-create-wlldn\" (UID: \"5de5a667-32ab-4232-b42c-071d6a4347f9\") " pod="openstack/placement-db-create-wlldn" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.649073 4717 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.649096 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq4rs\" (UniqueName: \"kubernetes.io/projected/5de5a667-32ab-4232-b42c-071d6a4347f9-kube-api-access-fq4rs\") pod \"placement-db-create-wlldn\" (UID: \"5de5a667-32ab-4232-b42c-071d6a4347f9\") " pod="openstack/placement-db-create-wlldn" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.649162 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e61481b3-51b8-4663-9dbd-c2abb65388df-operator-scripts\") pod \"placement-e8be-account-create-update-5wd2v\" (UID: \"e61481b3-51b8-4663-9dbd-c2abb65388df\") " pod="openstack/placement-e8be-account-create-update-5wd2v" Feb 18 12:05:52 crc kubenswrapper[4717]: E0218 12:05:52.649270 4717 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 12:05:52 crc kubenswrapper[4717]: E0218 12:05:52.649311 4717 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 12:05:52 crc kubenswrapper[4717]: E0218 12:05:52.649380 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift podName:9102b2e6-400b-4ba1-97a2-eb5be85f778a nodeName:}" failed. No retries permitted until 2026-02-18 12:06:00.649345607 +0000 UTC m=+995.051446963 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift") pod "swift-storage-0" (UID: "9102b2e6-400b-4ba1-97a2-eb5be85f778a") : configmap "swift-ring-files" not found Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.650371 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de5a667-32ab-4232-b42c-071d6a4347f9-operator-scripts\") pod \"placement-db-create-wlldn\" (UID: \"5de5a667-32ab-4232-b42c-071d6a4347f9\") " pod="openstack/placement-db-create-wlldn" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.668999 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq4rs\" (UniqueName: \"kubernetes.io/projected/5de5a667-32ab-4232-b42c-071d6a4347f9-kube-api-access-fq4rs\") pod \"placement-db-create-wlldn\" (UID: \"5de5a667-32ab-4232-b42c-071d6a4347f9\") " pod="openstack/placement-db-create-wlldn" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.733038 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-af1f-account-create-update-d8v98" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.751693 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e61481b3-51b8-4663-9dbd-c2abb65388df-operator-scripts\") pod \"placement-e8be-account-create-update-5wd2v\" (UID: \"e61481b3-51b8-4663-9dbd-c2abb65388df\") " pod="openstack/placement-e8be-account-create-update-5wd2v" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.751782 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv8lk\" (UniqueName: \"kubernetes.io/projected/e61481b3-51b8-4663-9dbd-c2abb65388df-kube-api-access-xv8lk\") pod \"placement-e8be-account-create-update-5wd2v\" (UID: \"e61481b3-51b8-4663-9dbd-c2abb65388df\") " pod="openstack/placement-e8be-account-create-update-5wd2v" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.752629 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e61481b3-51b8-4663-9dbd-c2abb65388df-operator-scripts\") pod \"placement-e8be-account-create-update-5wd2v\" (UID: \"e61481b3-51b8-4663-9dbd-c2abb65388df\") " pod="openstack/placement-e8be-account-create-update-5wd2v" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.761420 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-wlldn" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.773653 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv8lk\" (UniqueName: \"kubernetes.io/projected/e61481b3-51b8-4663-9dbd-c2abb65388df-kube-api-access-xv8lk\") pod \"placement-e8be-account-create-update-5wd2v\" (UID: \"e61481b3-51b8-4663-9dbd-c2abb65388df\") " pod="openstack/placement-e8be-account-create-update-5wd2v" Feb 18 12:05:52 crc kubenswrapper[4717]: I0218 12:05:52.868845 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e8be-account-create-update-5wd2v" Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.053796 4717 generic.go:334] "Generic (PLEG): container finished" podID="c7550ba5-c203-4016-90a7-4273e4a8688a" containerID="7232abb79b1dbe6b75784cf0943b334996fe7dfd0fefc377baa662a9954d518a" exitCode=0 Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.056249 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6995-account-create-update-gv7nt" event={"ID":"c7550ba5-c203-4016-90a7-4273e4a8688a","Type":"ContainerDied","Data":"7232abb79b1dbe6b75784cf0943b334996fe7dfd0fefc377baa662a9954d518a"} Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.056444 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6995-account-create-update-gv7nt" event={"ID":"c7550ba5-c203-4016-90a7-4273e4a8688a","Type":"ContainerStarted","Data":"4dc636654446e8d9c13af67be39d012f939e0c4dbf2ff84873c11fd0c2d4fd27"} Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.059094 4717 generic.go:334] "Generic (PLEG): container finished" podID="14958cee-73ca-44f3-a12e-b505644a4429" containerID="ef6cf167fb2f456d8de1104da4cdd2fef604b01687499a166c42ad4a1a0e5593" exitCode=0 Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.059380 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-create-4rv27" event={"ID":"14958cee-73ca-44f3-a12e-b505644a4429","Type":"ContainerDied","Data":"ef6cf167fb2f456d8de1104da4cdd2fef604b01687499a166c42ad4a1a0e5593"} Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.059423 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4rv27" event={"ID":"14958cee-73ca-44f3-a12e-b505644a4429","Type":"ContainerStarted","Data":"778af7dcca7f33034dc3b0466ea1d506096153892b05cb99eb57b28573f18162"} Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.099696 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-gfpjb"] Feb 18 12:05:53 crc kubenswrapper[4717]: W0218 12:05:53.112619 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb10cc0ac_d3a2_4723_bbb3_772cc5327d2e.slice/crio-c7c2cee53d73b4e6fbac42fc219555df2f0d2923f4e83cdab964890cda5491ba WatchSource:0}: Error finding container c7c2cee53d73b4e6fbac42fc219555df2f0d2923f4e83cdab964890cda5491ba: Status 404 returned error can't find the container with id c7c2cee53d73b4e6fbac42fc219555df2f0d2923f4e83cdab964890cda5491ba Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.346223 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wlldn"] Feb 18 12:05:53 crc kubenswrapper[4717]: W0218 12:05:53.497997 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe423299_da9c_4578_90bc_9c6e13b7acf6.slice/crio-6c973e1ca46b23e994ddbc4170eed0ac3d26b19bc7f8c3875155dad3ff3216a9 WatchSource:0}: Error finding container 6c973e1ca46b23e994ddbc4170eed0ac3d26b19bc7f8c3875155dad3ff3216a9: Status 404 returned error can't find the container with id 6c973e1ca46b23e994ddbc4170eed0ac3d26b19bc7f8c3875155dad3ff3216a9 Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.498799 4717 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/keystone-af1f-account-create-update-d8v98"] Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.606356 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e8be-account-create-update-5wd2v"] Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.612699 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-s47d9" Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.768222 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e0ed9f-aac4-4bb8-8659-c3f71eddce27-operator-scripts\") pod \"b8e0ed9f-aac4-4bb8-8659-c3f71eddce27\" (UID: \"b8e0ed9f-aac4-4bb8-8659-c3f71eddce27\") " Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.768344 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnfwq\" (UniqueName: \"kubernetes.io/projected/b8e0ed9f-aac4-4bb8-8659-c3f71eddce27-kube-api-access-pnfwq\") pod \"b8e0ed9f-aac4-4bb8-8659-c3f71eddce27\" (UID: \"b8e0ed9f-aac4-4bb8-8659-c3f71eddce27\") " Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.769151 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e0ed9f-aac4-4bb8-8659-c3f71eddce27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8e0ed9f-aac4-4bb8-8659-c3f71eddce27" (UID: "b8e0ed9f-aac4-4bb8-8659-c3f71eddce27"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.783591 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e0ed9f-aac4-4bb8-8659-c3f71eddce27-kube-api-access-pnfwq" (OuterVolumeSpecName: "kube-api-access-pnfwq") pod "b8e0ed9f-aac4-4bb8-8659-c3f71eddce27" (UID: "b8e0ed9f-aac4-4bb8-8659-c3f71eddce27"). 
InnerVolumeSpecName "kube-api-access-pnfwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.871156 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e0ed9f-aac4-4bb8-8659-c3f71eddce27-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:53 crc kubenswrapper[4717]: I0218 12:05:53.871409 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnfwq\" (UniqueName: \"kubernetes.io/projected/b8e0ed9f-aac4-4bb8-8659-c3f71eddce27-kube-api-access-pnfwq\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.023787 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.069533 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-s47d9" Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.069725 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s47d9" event={"ID":"b8e0ed9f-aac4-4bb8-8659-c3f71eddce27","Type":"ContainerDied","Data":"2756cbfd85648c22c6a5eb498333b3ca4f908ccd30585bc506fca0e6db097dbb"} Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.074904 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2756cbfd85648c22c6a5eb498333b3ca4f908ccd30585bc506fca0e6db097dbb" Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.076224 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e8be-account-create-update-5wd2v" event={"ID":"e61481b3-51b8-4663-9dbd-c2abb65388df","Type":"ContainerStarted","Data":"135bf50b3e5d96b318dfaa92d07cc1dcdcca43ed306a5e248818cc6cd639b069"} Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.076413 4717 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/placement-e8be-account-create-update-5wd2v" event={"ID":"e61481b3-51b8-4663-9dbd-c2abb65388df","Type":"ContainerStarted","Data":"af7c405042a2440ec9e8a6ee254c626a8365013024bbee33038acd1d41e1f20a"} Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.080609 4717 generic.go:334] "Generic (PLEG): container finished" podID="b10cc0ac-d3a2-4723-bbb3-772cc5327d2e" containerID="6376b2852ffa9a146954f41dfedd329fe54eda6e28a1dc1fe40261d14071c0c2" exitCode=0 Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.080703 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gfpjb" event={"ID":"b10cc0ac-d3a2-4723-bbb3-772cc5327d2e","Type":"ContainerDied","Data":"6376b2852ffa9a146954f41dfedd329fe54eda6e28a1dc1fe40261d14071c0c2"} Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.080735 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gfpjb" event={"ID":"b10cc0ac-d3a2-4723-bbb3-772cc5327d2e","Type":"ContainerStarted","Data":"c7c2cee53d73b4e6fbac42fc219555df2f0d2923f4e83cdab964890cda5491ba"} Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.083042 4717 generic.go:334] "Generic (PLEG): container finished" podID="fe423299-da9c-4578-90bc-9c6e13b7acf6" containerID="7c7d0e94f0915655531213795b3f4b9361142c6608e4ce263861d7911504a919" exitCode=0 Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.083100 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-af1f-account-create-update-d8v98" event={"ID":"fe423299-da9c-4578-90bc-9c6e13b7acf6","Type":"ContainerDied","Data":"7c7d0e94f0915655531213795b3f4b9361142c6608e4ce263861d7911504a919"} Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.083119 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-af1f-account-create-update-d8v98" 
event={"ID":"fe423299-da9c-4578-90bc-9c6e13b7acf6","Type":"ContainerStarted","Data":"6c973e1ca46b23e994ddbc4170eed0ac3d26b19bc7f8c3875155dad3ff3216a9"} Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.091960 4717 generic.go:334] "Generic (PLEG): container finished" podID="5de5a667-32ab-4232-b42c-071d6a4347f9" containerID="aaa840f431bc69c452df70f611906d35f757eacbd3c1a0064370de3f7de17158" exitCode=0 Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.092236 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wlldn" event={"ID":"5de5a667-32ab-4232-b42c-071d6a4347f9","Type":"ContainerDied","Data":"aaa840f431bc69c452df70f611906d35f757eacbd3c1a0064370de3f7de17158"} Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.092283 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wlldn" event={"ID":"5de5a667-32ab-4232-b42c-071d6a4347f9","Type":"ContainerStarted","Data":"c3af719beabc325d362e27e40ed327c8c259b6850e4b47a41159c72a5b9ba049"} Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.113705 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d96db"] Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.113970 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-d96db" podUID="09838886-b5e2-4ae4-9673-2dc5f4d02da3" containerName="dnsmasq-dns" containerID="cri-o://9d5569f2af736eb8dade2391851d90ec1afb46a7e6f1b26a1e6aed404d34e0da" gracePeriod=10 Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.118048 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-e8be-account-create-update-5wd2v" podStartSLOduration=2.118017991 podStartE2EDuration="2.118017991s" podCreationTimestamp="2026-02-18 12:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 12:05:54.103751594 +0000 UTC m=+988.505852910" watchObservedRunningTime="2026-02-18 12:05:54.118017991 +0000 UTC m=+988.520119307" Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.706582 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6995-account-create-update-gv7nt" Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.852940 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4rv27" Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.900145 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7550ba5-c203-4016-90a7-4273e4a8688a-operator-scripts\") pod \"c7550ba5-c203-4016-90a7-4273e4a8688a\" (UID: \"c7550ba5-c203-4016-90a7-4273e4a8688a\") " Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.900192 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlb6s\" (UniqueName: \"kubernetes.io/projected/c7550ba5-c203-4016-90a7-4273e4a8688a-kube-api-access-vlb6s\") pod \"c7550ba5-c203-4016-90a7-4273e4a8688a\" (UID: \"c7550ba5-c203-4016-90a7-4273e4a8688a\") " Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.901277 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7550ba5-c203-4016-90a7-4273e4a8688a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7550ba5-c203-4016-90a7-4273e4a8688a" (UID: "c7550ba5-c203-4016-90a7-4273e4a8688a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:54 crc kubenswrapper[4717]: I0218 12:05:54.907946 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7550ba5-c203-4016-90a7-4273e4a8688a-kube-api-access-vlb6s" (OuterVolumeSpecName: "kube-api-access-vlb6s") pod "c7550ba5-c203-4016-90a7-4273e4a8688a" (UID: "c7550ba5-c203-4016-90a7-4273e4a8688a"). InnerVolumeSpecName "kube-api-access-vlb6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.001158 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.001774 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcwtm\" (UniqueName: \"kubernetes.io/projected/14958cee-73ca-44f3-a12e-b505644a4429-kube-api-access-vcwtm\") pod \"14958cee-73ca-44f3-a12e-b505644a4429\" (UID: \"14958cee-73ca-44f3-a12e-b505644a4429\") " Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.001861 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14958cee-73ca-44f3-a12e-b505644a4429-operator-scripts\") pod \"14958cee-73ca-44f3-a12e-b505644a4429\" (UID: \"14958cee-73ca-44f3-a12e-b505644a4429\") " Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.002523 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7550ba5-c203-4016-90a7-4273e4a8688a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.002550 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlb6s\" (UniqueName: \"kubernetes.io/projected/c7550ba5-c203-4016-90a7-4273e4a8688a-kube-api-access-vlb6s\") on node \"crc\" DevicePath \"\"" Feb 18 
12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.002849 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14958cee-73ca-44f3-a12e-b505644a4429-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14958cee-73ca-44f3-a12e-b505644a4429" (UID: "14958cee-73ca-44f3-a12e-b505644a4429"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.005249 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14958cee-73ca-44f3-a12e-b505644a4429-kube-api-access-vcwtm" (OuterVolumeSpecName: "kube-api-access-vcwtm") pod "14958cee-73ca-44f3-a12e-b505644a4429" (UID: "14958cee-73ca-44f3-a12e-b505644a4429"). InnerVolumeSpecName "kube-api-access-vcwtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.103369 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzg8s\" (UniqueName: \"kubernetes.io/projected/09838886-b5e2-4ae4-9673-2dc5f4d02da3-kube-api-access-xzg8s\") pod \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.103485 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-config\") pod \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.103618 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-ovsdbserver-sb\") pod \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " Feb 18 12:05:55 crc 
kubenswrapper[4717]: I0218 12:05:55.103654 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-dns-svc\") pod \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.103758 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-ovsdbserver-nb\") pod \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\" (UID: \"09838886-b5e2-4ae4-9673-2dc5f4d02da3\") " Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.104233 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcwtm\" (UniqueName: \"kubernetes.io/projected/14958cee-73ca-44f3-a12e-b505644a4429-kube-api-access-vcwtm\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.104251 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14958cee-73ca-44f3-a12e-b505644a4429-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.130814 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09838886-b5e2-4ae4-9673-2dc5f4d02da3-kube-api-access-xzg8s" (OuterVolumeSpecName: "kube-api-access-xzg8s") pod "09838886-b5e2-4ae4-9673-2dc5f4d02da3" (UID: "09838886-b5e2-4ae4-9673-2dc5f4d02da3"). InnerVolumeSpecName "kube-api-access-xzg8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.146999 4717 generic.go:334] "Generic (PLEG): container finished" podID="e61481b3-51b8-4663-9dbd-c2abb65388df" containerID="135bf50b3e5d96b318dfaa92d07cc1dcdcca43ed306a5e248818cc6cd639b069" exitCode=0 Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.147112 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e8be-account-create-update-5wd2v" event={"ID":"e61481b3-51b8-4663-9dbd-c2abb65388df","Type":"ContainerDied","Data":"135bf50b3e5d96b318dfaa92d07cc1dcdcca43ed306a5e248818cc6cd639b069"} Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.149789 4717 generic.go:334] "Generic (PLEG): container finished" podID="09838886-b5e2-4ae4-9673-2dc5f4d02da3" containerID="9d5569f2af736eb8dade2391851d90ec1afb46a7e6f1b26a1e6aed404d34e0da" exitCode=0 Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.149858 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-d96db" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.149867 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-d96db" event={"ID":"09838886-b5e2-4ae4-9673-2dc5f4d02da3","Type":"ContainerDied","Data":"9d5569f2af736eb8dade2391851d90ec1afb46a7e6f1b26a1e6aed404d34e0da"} Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.149898 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-d96db" event={"ID":"09838886-b5e2-4ae4-9673-2dc5f4d02da3","Type":"ContainerDied","Data":"19af81836c8ea4506643cf50ab6e90b9d39599246974bdbef8c7a5ff630cc8a9"} Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.149918 4717 scope.go:117] "RemoveContainer" containerID="9d5569f2af736eb8dade2391851d90ec1afb46a7e6f1b26a1e6aed404d34e0da" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.153393 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6995-account-create-update-gv7nt" event={"ID":"c7550ba5-c203-4016-90a7-4273e4a8688a","Type":"ContainerDied","Data":"4dc636654446e8d9c13af67be39d012f939e0c4dbf2ff84873c11fd0c2d4fd27"} Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.153440 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dc636654446e8d9c13af67be39d012f939e0c4dbf2ff84873c11fd0c2d4fd27" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.153506 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6995-account-create-update-gv7nt" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.163861 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4rv27" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.164668 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4rv27" event={"ID":"14958cee-73ca-44f3-a12e-b505644a4429","Type":"ContainerDied","Data":"778af7dcca7f33034dc3b0466ea1d506096153892b05cb99eb57b28573f18162"} Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.164721 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="778af7dcca7f33034dc3b0466ea1d506096153892b05cb99eb57b28573f18162" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.173171 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "09838886-b5e2-4ae4-9673-2dc5f4d02da3" (UID: "09838886-b5e2-4ae4-9673-2dc5f4d02da3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.190218 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "09838886-b5e2-4ae4-9673-2dc5f4d02da3" (UID: "09838886-b5e2-4ae4-9673-2dc5f4d02da3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.190565 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-config" (OuterVolumeSpecName: "config") pod "09838886-b5e2-4ae4-9673-2dc5f4d02da3" (UID: "09838886-b5e2-4ae4-9673-2dc5f4d02da3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.193508 4717 scope.go:117] "RemoveContainer" containerID="1b1e487348ae75c0b21ea562f9cdf75a8e261d4b2a2b1173852721dfd8344864" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.198413 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09838886-b5e2-4ae4-9673-2dc5f4d02da3" (UID: "09838886-b5e2-4ae4-9673-2dc5f4d02da3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.206235 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.206316 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzg8s\" (UniqueName: \"kubernetes.io/projected/09838886-b5e2-4ae4-9673-2dc5f4d02da3-kube-api-access-xzg8s\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.206328 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.206338 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.206347 4717 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09838886-b5e2-4ae4-9673-2dc5f4d02da3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:55 crc 
kubenswrapper[4717]: I0218 12:05:55.223210 4717 scope.go:117] "RemoveContainer" containerID="9d5569f2af736eb8dade2391851d90ec1afb46a7e6f1b26a1e6aed404d34e0da" Feb 18 12:05:55 crc kubenswrapper[4717]: E0218 12:05:55.223801 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d5569f2af736eb8dade2391851d90ec1afb46a7e6f1b26a1e6aed404d34e0da\": container with ID starting with 9d5569f2af736eb8dade2391851d90ec1afb46a7e6f1b26a1e6aed404d34e0da not found: ID does not exist" containerID="9d5569f2af736eb8dade2391851d90ec1afb46a7e6f1b26a1e6aed404d34e0da" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.223840 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d5569f2af736eb8dade2391851d90ec1afb46a7e6f1b26a1e6aed404d34e0da"} err="failed to get container status \"9d5569f2af736eb8dade2391851d90ec1afb46a7e6f1b26a1e6aed404d34e0da\": rpc error: code = NotFound desc = could not find container \"9d5569f2af736eb8dade2391851d90ec1afb46a7e6f1b26a1e6aed404d34e0da\": container with ID starting with 9d5569f2af736eb8dade2391851d90ec1afb46a7e6f1b26a1e6aed404d34e0da not found: ID does not exist" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.223865 4717 scope.go:117] "RemoveContainer" containerID="1b1e487348ae75c0b21ea562f9cdf75a8e261d4b2a2b1173852721dfd8344864" Feb 18 12:05:55 crc kubenswrapper[4717]: E0218 12:05:55.224160 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b1e487348ae75c0b21ea562f9cdf75a8e261d4b2a2b1173852721dfd8344864\": container with ID starting with 1b1e487348ae75c0b21ea562f9cdf75a8e261d4b2a2b1173852721dfd8344864 not found: ID does not exist" containerID="1b1e487348ae75c0b21ea562f9cdf75a8e261d4b2a2b1173852721dfd8344864" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.224211 4717 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1b1e487348ae75c0b21ea562f9cdf75a8e261d4b2a2b1173852721dfd8344864"} err="failed to get container status \"1b1e487348ae75c0b21ea562f9cdf75a8e261d4b2a2b1173852721dfd8344864\": rpc error: code = NotFound desc = could not find container \"1b1e487348ae75c0b21ea562f9cdf75a8e261d4b2a2b1173852721dfd8344864\": container with ID starting with 1b1e487348ae75c0b21ea562f9cdf75a8e261d4b2a2b1173852721dfd8344864 not found: ID does not exist" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.472693 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-gfpjb" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.493349 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d96db"] Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.498972 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-d96db"] Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.613078 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7wcf\" (UniqueName: \"kubernetes.io/projected/b10cc0ac-d3a2-4723-bbb3-772cc5327d2e-kube-api-access-c7wcf\") pod \"b10cc0ac-d3a2-4723-bbb3-772cc5327d2e\" (UID: \"b10cc0ac-d3a2-4723-bbb3-772cc5327d2e\") " Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.613551 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10cc0ac-d3a2-4723-bbb3-772cc5327d2e-operator-scripts\") pod \"b10cc0ac-d3a2-4723-bbb3-772cc5327d2e\" (UID: \"b10cc0ac-d3a2-4723-bbb3-772cc5327d2e\") " Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.615306 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b10cc0ac-d3a2-4723-bbb3-772cc5327d2e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"b10cc0ac-d3a2-4723-bbb3-772cc5327d2e" (UID: "b10cc0ac-d3a2-4723-bbb3-772cc5327d2e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.625111 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b10cc0ac-d3a2-4723-bbb3-772cc5327d2e-kube-api-access-c7wcf" (OuterVolumeSpecName: "kube-api-access-c7wcf") pod "b10cc0ac-d3a2-4723-bbb3-772cc5327d2e" (UID: "b10cc0ac-d3a2-4723-bbb3-772cc5327d2e"). InnerVolumeSpecName "kube-api-access-c7wcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.716118 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7wcf\" (UniqueName: \"kubernetes.io/projected/b10cc0ac-d3a2-4723-bbb3-772cc5327d2e-kube-api-access-c7wcf\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.716172 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10cc0ac-d3a2-4723-bbb3-772cc5327d2e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.723247 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-af1f-account-create-update-d8v98" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.757150 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-wlldn" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.918690 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbgkz\" (UniqueName: \"kubernetes.io/projected/fe423299-da9c-4578-90bc-9c6e13b7acf6-kube-api-access-kbgkz\") pod \"fe423299-da9c-4578-90bc-9c6e13b7acf6\" (UID: \"fe423299-da9c-4578-90bc-9c6e13b7acf6\") " Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.918782 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq4rs\" (UniqueName: \"kubernetes.io/projected/5de5a667-32ab-4232-b42c-071d6a4347f9-kube-api-access-fq4rs\") pod \"5de5a667-32ab-4232-b42c-071d6a4347f9\" (UID: \"5de5a667-32ab-4232-b42c-071d6a4347f9\") " Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.918906 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de5a667-32ab-4232-b42c-071d6a4347f9-operator-scripts\") pod \"5de5a667-32ab-4232-b42c-071d6a4347f9\" (UID: \"5de5a667-32ab-4232-b42c-071d6a4347f9\") " Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.918934 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe423299-da9c-4578-90bc-9c6e13b7acf6-operator-scripts\") pod \"fe423299-da9c-4578-90bc-9c6e13b7acf6\" (UID: \"fe423299-da9c-4578-90bc-9c6e13b7acf6\") " Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.919604 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5de5a667-32ab-4232-b42c-071d6a4347f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5de5a667-32ab-4232-b42c-071d6a4347f9" (UID: "5de5a667-32ab-4232-b42c-071d6a4347f9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.919731 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe423299-da9c-4578-90bc-9c6e13b7acf6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe423299-da9c-4578-90bc-9c6e13b7acf6" (UID: "fe423299-da9c-4578-90bc-9c6e13b7acf6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.921904 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de5a667-32ab-4232-b42c-071d6a4347f9-kube-api-access-fq4rs" (OuterVolumeSpecName: "kube-api-access-fq4rs") pod "5de5a667-32ab-4232-b42c-071d6a4347f9" (UID: "5de5a667-32ab-4232-b42c-071d6a4347f9"). InnerVolumeSpecName "kube-api-access-fq4rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:05:55 crc kubenswrapper[4717]: I0218 12:05:55.922369 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe423299-da9c-4578-90bc-9c6e13b7acf6-kube-api-access-kbgkz" (OuterVolumeSpecName: "kube-api-access-kbgkz") pod "fe423299-da9c-4578-90bc-9c6e13b7acf6" (UID: "fe423299-da9c-4578-90bc-9c6e13b7acf6"). InnerVolumeSpecName "kube-api-access-kbgkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.021610 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbgkz\" (UniqueName: \"kubernetes.io/projected/fe423299-da9c-4578-90bc-9c6e13b7acf6-kube-api-access-kbgkz\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.021649 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq4rs\" (UniqueName: \"kubernetes.io/projected/5de5a667-32ab-4232-b42c-071d6a4347f9-kube-api-access-fq4rs\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.021659 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de5a667-32ab-4232-b42c-071d6a4347f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.021670 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe423299-da9c-4578-90bc-9c6e13b7acf6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.173595 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-af1f-account-create-update-d8v98" event={"ID":"fe423299-da9c-4578-90bc-9c6e13b7acf6","Type":"ContainerDied","Data":"6c973e1ca46b23e994ddbc4170eed0ac3d26b19bc7f8c3875155dad3ff3216a9"} Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.173653 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c973e1ca46b23e994ddbc4170eed0ac3d26b19bc7f8c3875155dad3ff3216a9" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.173627 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-af1f-account-create-update-d8v98" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.175007 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wlldn" event={"ID":"5de5a667-32ab-4232-b42c-071d6a4347f9","Type":"ContainerDied","Data":"c3af719beabc325d362e27e40ed327c8c259b6850e4b47a41159c72a5b9ba049"} Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.175036 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wlldn" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.175057 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3af719beabc325d362e27e40ed327c8c259b6850e4b47a41159c72a5b9ba049" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.176354 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-gfpjb" event={"ID":"b10cc0ac-d3a2-4723-bbb3-772cc5327d2e","Type":"ContainerDied","Data":"c7c2cee53d73b4e6fbac42fc219555df2f0d2923f4e83cdab964890cda5491ba"} Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.176382 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7c2cee53d73b4e6fbac42fc219555df2f0d2923f4e83cdab964890cda5491ba" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.176410 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-gfpjb" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.477192 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-h7p5l"] Feb 18 12:05:56 crc kubenswrapper[4717]: E0218 12:05:56.477728 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e0ed9f-aac4-4bb8-8659-c3f71eddce27" containerName="mariadb-account-create-update" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.477751 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e0ed9f-aac4-4bb8-8659-c3f71eddce27" containerName="mariadb-account-create-update" Feb 18 12:05:56 crc kubenswrapper[4717]: E0218 12:05:56.477795 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09838886-b5e2-4ae4-9673-2dc5f4d02da3" containerName="dnsmasq-dns" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.477803 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="09838886-b5e2-4ae4-9673-2dc5f4d02da3" containerName="dnsmasq-dns" Feb 18 12:05:56 crc kubenswrapper[4717]: E0218 12:05:56.477818 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10cc0ac-d3a2-4723-bbb3-772cc5327d2e" containerName="mariadb-database-create" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.477826 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10cc0ac-d3a2-4723-bbb3-772cc5327d2e" containerName="mariadb-database-create" Feb 18 12:05:56 crc kubenswrapper[4717]: E0218 12:05:56.477838 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7550ba5-c203-4016-90a7-4273e4a8688a" containerName="mariadb-account-create-update" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.477846 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7550ba5-c203-4016-90a7-4273e4a8688a" containerName="mariadb-account-create-update" Feb 18 12:05:56 crc kubenswrapper[4717]: E0218 12:05:56.477859 4717 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5de5a667-32ab-4232-b42c-071d6a4347f9" containerName="mariadb-database-create" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.477867 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de5a667-32ab-4232-b42c-071d6a4347f9" containerName="mariadb-database-create" Feb 18 12:05:56 crc kubenswrapper[4717]: E0218 12:05:56.477880 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14958cee-73ca-44f3-a12e-b505644a4429" containerName="mariadb-database-create" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.477888 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="14958cee-73ca-44f3-a12e-b505644a4429" containerName="mariadb-database-create" Feb 18 12:05:56 crc kubenswrapper[4717]: E0218 12:05:56.477899 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09838886-b5e2-4ae4-9673-2dc5f4d02da3" containerName="init" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.477907 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="09838886-b5e2-4ae4-9673-2dc5f4d02da3" containerName="init" Feb 18 12:05:56 crc kubenswrapper[4717]: E0218 12:05:56.477924 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe423299-da9c-4578-90bc-9c6e13b7acf6" containerName="mariadb-account-create-update" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.477932 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe423299-da9c-4578-90bc-9c6e13b7acf6" containerName="mariadb-account-create-update" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.478131 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e0ed9f-aac4-4bb8-8659-c3f71eddce27" containerName="mariadb-account-create-update" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.478157 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="14958cee-73ca-44f3-a12e-b505644a4429" containerName="mariadb-database-create" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.478166 4717 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c7550ba5-c203-4016-90a7-4273e4a8688a" containerName="mariadb-account-create-update" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.478196 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="5de5a667-32ab-4232-b42c-071d6a4347f9" containerName="mariadb-database-create" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.478209 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10cc0ac-d3a2-4723-bbb3-772cc5327d2e" containerName="mariadb-database-create" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.478222 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="09838886-b5e2-4ae4-9673-2dc5f4d02da3" containerName="dnsmasq-dns" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.478232 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe423299-da9c-4578-90bc-9c6e13b7acf6" containerName="mariadb-account-create-update" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.478851 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-h7p5l" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.482587 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-c7wn5" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.494177 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.497024 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-h7p5l"] Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.545564 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e8be-account-create-update-5wd2v" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.633403 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ztqc\" (UniqueName: \"kubernetes.io/projected/8431e64c-4bcb-4dce-a7bb-123b54445b08-kube-api-access-7ztqc\") pod \"glance-db-sync-h7p5l\" (UID: \"8431e64c-4bcb-4dce-a7bb-123b54445b08\") " pod="openstack/glance-db-sync-h7p5l" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.633492 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-db-sync-config-data\") pod \"glance-db-sync-h7p5l\" (UID: \"8431e64c-4bcb-4dce-a7bb-123b54445b08\") " pod="openstack/glance-db-sync-h7p5l" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.633536 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-config-data\") pod \"glance-db-sync-h7p5l\" (UID: \"8431e64c-4bcb-4dce-a7bb-123b54445b08\") " pod="openstack/glance-db-sync-h7p5l" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.633571 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-combined-ca-bundle\") pod \"glance-db-sync-h7p5l\" (UID: \"8431e64c-4bcb-4dce-a7bb-123b54445b08\") " pod="openstack/glance-db-sync-h7p5l" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.734487 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv8lk\" (UniqueName: \"kubernetes.io/projected/e61481b3-51b8-4663-9dbd-c2abb65388df-kube-api-access-xv8lk\") pod \"e61481b3-51b8-4663-9dbd-c2abb65388df\" (UID: 
\"e61481b3-51b8-4663-9dbd-c2abb65388df\") " Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.734574 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e61481b3-51b8-4663-9dbd-c2abb65388df-operator-scripts\") pod \"e61481b3-51b8-4663-9dbd-c2abb65388df\" (UID: \"e61481b3-51b8-4663-9dbd-c2abb65388df\") " Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.735110 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ztqc\" (UniqueName: \"kubernetes.io/projected/8431e64c-4bcb-4dce-a7bb-123b54445b08-kube-api-access-7ztqc\") pod \"glance-db-sync-h7p5l\" (UID: \"8431e64c-4bcb-4dce-a7bb-123b54445b08\") " pod="openstack/glance-db-sync-h7p5l" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.735201 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-db-sync-config-data\") pod \"glance-db-sync-h7p5l\" (UID: \"8431e64c-4bcb-4dce-a7bb-123b54445b08\") " pod="openstack/glance-db-sync-h7p5l" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.735240 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-config-data\") pod \"glance-db-sync-h7p5l\" (UID: \"8431e64c-4bcb-4dce-a7bb-123b54445b08\") " pod="openstack/glance-db-sync-h7p5l" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.735308 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-combined-ca-bundle\") pod \"glance-db-sync-h7p5l\" (UID: \"8431e64c-4bcb-4dce-a7bb-123b54445b08\") " pod="openstack/glance-db-sync-h7p5l" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.736205 4717 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e61481b3-51b8-4663-9dbd-c2abb65388df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e61481b3-51b8-4663-9dbd-c2abb65388df" (UID: "e61481b3-51b8-4663-9dbd-c2abb65388df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.741199 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-config-data\") pod \"glance-db-sync-h7p5l\" (UID: \"8431e64c-4bcb-4dce-a7bb-123b54445b08\") " pod="openstack/glance-db-sync-h7p5l" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.741236 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-combined-ca-bundle\") pod \"glance-db-sync-h7p5l\" (UID: \"8431e64c-4bcb-4dce-a7bb-123b54445b08\") " pod="openstack/glance-db-sync-h7p5l" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.742553 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e61481b3-51b8-4663-9dbd-c2abb65388df-kube-api-access-xv8lk" (OuterVolumeSpecName: "kube-api-access-xv8lk") pod "e61481b3-51b8-4663-9dbd-c2abb65388df" (UID: "e61481b3-51b8-4663-9dbd-c2abb65388df"). InnerVolumeSpecName "kube-api-access-xv8lk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.742790 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-db-sync-config-data\") pod \"glance-db-sync-h7p5l\" (UID: \"8431e64c-4bcb-4dce-a7bb-123b54445b08\") " pod="openstack/glance-db-sync-h7p5l" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.753581 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ztqc\" (UniqueName: \"kubernetes.io/projected/8431e64c-4bcb-4dce-a7bb-123b54445b08-kube-api-access-7ztqc\") pod \"glance-db-sync-h7p5l\" (UID: \"8431e64c-4bcb-4dce-a7bb-123b54445b08\") " pod="openstack/glance-db-sync-h7p5l" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.837286 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv8lk\" (UniqueName: \"kubernetes.io/projected/e61481b3-51b8-4663-9dbd-c2abb65388df-kube-api-access-xv8lk\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.837334 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e61481b3-51b8-4663-9dbd-c2abb65388df-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:05:56 crc kubenswrapper[4717]: I0218 12:05:56.858544 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-h7p5l" Feb 18 12:05:57 crc kubenswrapper[4717]: I0218 12:05:57.052985 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09838886-b5e2-4ae4-9673-2dc5f4d02da3" path="/var/lib/kubelet/pods/09838886-b5e2-4ae4-9673-2dc5f4d02da3/volumes" Feb 18 12:05:57 crc kubenswrapper[4717]: I0218 12:05:57.188184 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e8be-account-create-update-5wd2v" event={"ID":"e61481b3-51b8-4663-9dbd-c2abb65388df","Type":"ContainerDied","Data":"af7c405042a2440ec9e8a6ee254c626a8365013024bbee33038acd1d41e1f20a"} Feb 18 12:05:57 crc kubenswrapper[4717]: I0218 12:05:57.188235 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af7c405042a2440ec9e8a6ee254c626a8365013024bbee33038acd1d41e1f20a" Feb 18 12:05:57 crc kubenswrapper[4717]: I0218 12:05:57.188301 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e8be-account-create-update-5wd2v" Feb 18 12:05:57 crc kubenswrapper[4717]: W0218 12:05:57.473595 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8431e64c_4bcb_4dce_a7bb_123b54445b08.slice/crio-3468d2964cf05965bd5e56e84c2d8e8827b445f74a3b4bb540bea75f395c7390 WatchSource:0}: Error finding container 3468d2964cf05965bd5e56e84c2d8e8827b445f74a3b4bb540bea75f395c7390: Status 404 returned error can't find the container with id 3468d2964cf05965bd5e56e84c2d8e8827b445f74a3b4bb540bea75f395c7390 Feb 18 12:05:57 crc kubenswrapper[4717]: I0218 12:05:57.483645 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-h7p5l"] Feb 18 12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.081759 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-s47d9"] Feb 18 12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.090101 4717 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-s47d9"] Feb 18 12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.172769 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wdpkl"] Feb 18 12:05:58 crc kubenswrapper[4717]: E0218 12:05:58.173250 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e61481b3-51b8-4663-9dbd-c2abb65388df" containerName="mariadb-account-create-update" Feb 18 12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.173284 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="e61481b3-51b8-4663-9dbd-c2abb65388df" containerName="mariadb-account-create-update" Feb 18 12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.173473 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="e61481b3-51b8-4663-9dbd-c2abb65388df" containerName="mariadb-account-create-update" Feb 18 12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.174132 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wdpkl" Feb 18 12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.180636 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wdpkl"] Feb 18 12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.195985 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.208796 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-h7p5l" event={"ID":"8431e64c-4bcb-4dce-a7bb-123b54445b08","Type":"ContainerStarted","Data":"3468d2964cf05965bd5e56e84c2d8e8827b445f74a3b4bb540bea75f395c7390"} Feb 18 12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.267309 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d6db735-5566-4452-adeb-88fa28f4f417-operator-scripts\") pod \"root-account-create-update-wdpkl\" (UID: \"6d6db735-5566-4452-adeb-88fa28f4f417\") " pod="openstack/root-account-create-update-wdpkl" Feb 18 12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.267370 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d88vz\" (UniqueName: \"kubernetes.io/projected/6d6db735-5566-4452-adeb-88fa28f4f417-kube-api-access-d88vz\") pod \"root-account-create-update-wdpkl\" (UID: \"6d6db735-5566-4452-adeb-88fa28f4f417\") " pod="openstack/root-account-create-update-wdpkl" Feb 18 12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.368428 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d88vz\" (UniqueName: \"kubernetes.io/projected/6d6db735-5566-4452-adeb-88fa28f4f417-kube-api-access-d88vz\") pod \"root-account-create-update-wdpkl\" (UID: \"6d6db735-5566-4452-adeb-88fa28f4f417\") " pod="openstack/root-account-create-update-wdpkl" Feb 18 
12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.368626 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d6db735-5566-4452-adeb-88fa28f4f417-operator-scripts\") pod \"root-account-create-update-wdpkl\" (UID: \"6d6db735-5566-4452-adeb-88fa28f4f417\") " pod="openstack/root-account-create-update-wdpkl" Feb 18 12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.369554 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d6db735-5566-4452-adeb-88fa28f4f417-operator-scripts\") pod \"root-account-create-update-wdpkl\" (UID: \"6d6db735-5566-4452-adeb-88fa28f4f417\") " pod="openstack/root-account-create-update-wdpkl" Feb 18 12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.390284 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d88vz\" (UniqueName: \"kubernetes.io/projected/6d6db735-5566-4452-adeb-88fa28f4f417-kube-api-access-d88vz\") pod \"root-account-create-update-wdpkl\" (UID: \"6d6db735-5566-4452-adeb-88fa28f4f417\") " pod="openstack/root-account-create-update-wdpkl" Feb 18 12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.508879 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wdpkl" Feb 18 12:05:58 crc kubenswrapper[4717]: I0218 12:05:58.984530 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wdpkl"] Feb 18 12:05:58 crc kubenswrapper[4717]: W0218 12:05:58.985401 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d6db735_5566_4452_adeb_88fa28f4f417.slice/crio-c128b8f74cddf1667ce441330fa2b341b3d69e7edf225b261e791a56cb279c87 WatchSource:0}: Error finding container c128b8f74cddf1667ce441330fa2b341b3d69e7edf225b261e791a56cb279c87: Status 404 returned error can't find the container with id c128b8f74cddf1667ce441330fa2b341b3d69e7edf225b261e791a56cb279c87 Feb 18 12:05:59 crc kubenswrapper[4717]: I0218 12:05:59.047301 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e0ed9f-aac4-4bb8-8659-c3f71eddce27" path="/var/lib/kubelet/pods/b8e0ed9f-aac4-4bb8-8659-c3f71eddce27/volumes" Feb 18 12:05:59 crc kubenswrapper[4717]: I0218 12:05:59.219331 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wdpkl" event={"ID":"6d6db735-5566-4452-adeb-88fa28f4f417","Type":"ContainerStarted","Data":"dc96ce52a90690e2c3c36c5ef50245f2d11b62c73b9e59b4cb5ed32a99ffa2ad"} Feb 18 12:05:59 crc kubenswrapper[4717]: I0218 12:05:59.220616 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wdpkl" event={"ID":"6d6db735-5566-4452-adeb-88fa28f4f417","Type":"ContainerStarted","Data":"c128b8f74cddf1667ce441330fa2b341b3d69e7edf225b261e791a56cb279c87"} Feb 18 12:05:59 crc kubenswrapper[4717]: I0218 12:05:59.221575 4717 generic.go:334] "Generic (PLEG): container finished" podID="ac3ad8c1-04a7-46f5-9c76-98c92e3c2158" containerID="95deee24517dbce2ac708850e533ba80de7a7bc3f03609053e9db3118ddf9f0a" exitCode=0 Feb 18 12:05:59 crc kubenswrapper[4717]: I0218 12:05:59.221632 4717 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-66d7l" event={"ID":"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158","Type":"ContainerDied","Data":"95deee24517dbce2ac708850e533ba80de7a7bc3f03609053e9db3118ddf9f0a"} Feb 18 12:05:59 crc kubenswrapper[4717]: I0218 12:05:59.242073 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-wdpkl" podStartSLOduration=1.24204964 podStartE2EDuration="1.24204964s" podCreationTimestamp="2026-02-18 12:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:05:59.234753697 +0000 UTC m=+993.636855023" watchObservedRunningTime="2026-02-18 12:05:59.24204964 +0000 UTC m=+993.644150956" Feb 18 12:05:59 crc kubenswrapper[4717]: I0218 12:05:59.731406 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.231660 4717 generic.go:334] "Generic (PLEG): container finished" podID="6d6db735-5566-4452-adeb-88fa28f4f417" containerID="dc96ce52a90690e2c3c36c5ef50245f2d11b62c73b9e59b4cb5ed32a99ffa2ad" exitCode=0 Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.231737 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wdpkl" event={"ID":"6d6db735-5566-4452-adeb-88fa28f4f417","Type":"ContainerDied","Data":"dc96ce52a90690e2c3c36c5ef50245f2d11b62c73b9e59b4cb5ed32a99ffa2ad"} Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.575735 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.622804 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-combined-ca-bundle\") pod \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.622902 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-swiftconf\") pod \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.623001 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-ring-data-devices\") pod \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.623039 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-scripts\") pod \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.623063 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-etc-swift\") pod \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.623113 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-dispersionconf\") pod \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.623137 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8fqd\" (UniqueName: \"kubernetes.io/projected/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-kube-api-access-l8fqd\") pod \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\" (UID: \"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158\") " Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.625477 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "ac3ad8c1-04a7-46f5-9c76-98c92e3c2158" (UID: "ac3ad8c1-04a7-46f5-9c76-98c92e3c2158"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.630557 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ac3ad8c1-04a7-46f5-9c76-98c92e3c2158" (UID: "ac3ad8c1-04a7-46f5-9c76-98c92e3c2158"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.645289 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-kube-api-access-l8fqd" (OuterVolumeSpecName: "kube-api-access-l8fqd") pod "ac3ad8c1-04a7-46f5-9c76-98c92e3c2158" (UID: "ac3ad8c1-04a7-46f5-9c76-98c92e3c2158"). InnerVolumeSpecName "kube-api-access-l8fqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.649220 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "ac3ad8c1-04a7-46f5-9c76-98c92e3c2158" (UID: "ac3ad8c1-04a7-46f5-9c76-98c92e3c2158"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.653726 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac3ad8c1-04a7-46f5-9c76-98c92e3c2158" (UID: "ac3ad8c1-04a7-46f5-9c76-98c92e3c2158"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.657470 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-scripts" (OuterVolumeSpecName: "scripts") pod "ac3ad8c1-04a7-46f5-9c76-98c92e3c2158" (UID: "ac3ad8c1-04a7-46f5-9c76-98c92e3c2158"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.661288 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "ac3ad8c1-04a7-46f5-9c76-98c92e3c2158" (UID: "ac3ad8c1-04a7-46f5-9c76-98c92e3c2158"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.725035 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.725199 4717 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.725214 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8fqd\" (UniqueName: \"kubernetes.io/projected/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-kube-api-access-l8fqd\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.725225 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.725235 4717 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.725243 4717 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.725251 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-scripts\") on node \"crc\" 
DevicePath \"\"" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.725279 4717 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ac3ad8c1-04a7-46f5-9c76-98c92e3c2158-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.730561 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9102b2e6-400b-4ba1-97a2-eb5be85f778a-etc-swift\") pod \"swift-storage-0\" (UID: \"9102b2e6-400b-4ba1-97a2-eb5be85f778a\") " pod="openstack/swift-storage-0" Feb 18 12:06:00 crc kubenswrapper[4717]: I0218 12:06:00.793389 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 18 12:06:01 crc kubenswrapper[4717]: I0218 12:06:01.246719 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-66d7l" Feb 18 12:06:01 crc kubenswrapper[4717]: I0218 12:06:01.250621 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-66d7l" event={"ID":"ac3ad8c1-04a7-46f5-9c76-98c92e3c2158","Type":"ContainerDied","Data":"4546b6881b07ff3dfa1ae986f890e1a4c63e1870c7810dd5cd7c67546f809676"} Feb 18 12:06:01 crc kubenswrapper[4717]: I0218 12:06:01.250663 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4546b6881b07ff3dfa1ae986f890e1a4c63e1870c7810dd5cd7c67546f809676" Feb 18 12:06:01 crc kubenswrapper[4717]: I0218 12:06:01.609628 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 12:06:01 crc kubenswrapper[4717]: W0218 12:06:01.629280 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9102b2e6_400b_4ba1_97a2_eb5be85f778a.slice/crio-98f9b69a017bde86a39d52fb045e8267449034a750fd35e57e171e97246cc749 WatchSource:0}: Error finding 
container 98f9b69a017bde86a39d52fb045e8267449034a750fd35e57e171e97246cc749: Status 404 returned error can't find the container with id 98f9b69a017bde86a39d52fb045e8267449034a750fd35e57e171e97246cc749 Feb 18 12:06:01 crc kubenswrapper[4717]: I0218 12:06:01.685742 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wdpkl" Feb 18 12:06:01 crc kubenswrapper[4717]: I0218 12:06:01.760863 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d6db735-5566-4452-adeb-88fa28f4f417-operator-scripts\") pod \"6d6db735-5566-4452-adeb-88fa28f4f417\" (UID: \"6d6db735-5566-4452-adeb-88fa28f4f417\") " Feb 18 12:06:01 crc kubenswrapper[4717]: I0218 12:06:01.761040 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d88vz\" (UniqueName: \"kubernetes.io/projected/6d6db735-5566-4452-adeb-88fa28f4f417-kube-api-access-d88vz\") pod \"6d6db735-5566-4452-adeb-88fa28f4f417\" (UID: \"6d6db735-5566-4452-adeb-88fa28f4f417\") " Feb 18 12:06:01 crc kubenswrapper[4717]: I0218 12:06:01.762284 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d6db735-5566-4452-adeb-88fa28f4f417-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d6db735-5566-4452-adeb-88fa28f4f417" (UID: "6d6db735-5566-4452-adeb-88fa28f4f417"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:01 crc kubenswrapper[4717]: I0218 12:06:01.781755 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d6db735-5566-4452-adeb-88fa28f4f417-kube-api-access-d88vz" (OuterVolumeSpecName: "kube-api-access-d88vz") pod "6d6db735-5566-4452-adeb-88fa28f4f417" (UID: "6d6db735-5566-4452-adeb-88fa28f4f417"). InnerVolumeSpecName "kube-api-access-d88vz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:06:01 crc kubenswrapper[4717]: I0218 12:06:01.863173 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d88vz\" (UniqueName: \"kubernetes.io/projected/6d6db735-5566-4452-adeb-88fa28f4f417-kube-api-access-d88vz\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:01 crc kubenswrapper[4717]: I0218 12:06:01.863221 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d6db735-5566-4452-adeb-88fa28f4f417-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:02 crc kubenswrapper[4717]: I0218 12:06:02.257680 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wdpkl" Feb 18 12:06:02 crc kubenswrapper[4717]: I0218 12:06:02.258757 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wdpkl" event={"ID":"6d6db735-5566-4452-adeb-88fa28f4f417","Type":"ContainerDied","Data":"c128b8f74cddf1667ce441330fa2b341b3d69e7edf225b261e791a56cb279c87"} Feb 18 12:06:02 crc kubenswrapper[4717]: I0218 12:06:02.258818 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c128b8f74cddf1667ce441330fa2b341b3d69e7edf225b261e791a56cb279c87" Feb 18 12:06:02 crc kubenswrapper[4717]: I0218 12:06:02.260332 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9102b2e6-400b-4ba1-97a2-eb5be85f778a","Type":"ContainerStarted","Data":"98f9b69a017bde86a39d52fb045e8267449034a750fd35e57e171e97246cc749"} Feb 18 12:06:06 crc kubenswrapper[4717]: I0218 12:06:06.917351 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cqvjv" podUID="6e3a25d1-3ad3-4ecb-bca6-84643516d734" containerName="ovn-controller" probeResult="failure" output=< Feb 18 12:06:06 crc kubenswrapper[4717]: ERROR - ovn-controller connection status is 'not 
connected', expecting 'connected' status Feb 18 12:06:06 crc kubenswrapper[4717]: > Feb 18 12:06:06 crc kubenswrapper[4717]: I0218 12:06:06.987054 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:06:06 crc kubenswrapper[4717]: I0218 12:06:06.993737 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zzbgg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.277015 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cqvjv-config-mrzkg"] Feb 18 12:06:07 crc kubenswrapper[4717]: E0218 12:06:07.277575 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d6db735-5566-4452-adeb-88fa28f4f417" containerName="mariadb-account-create-update" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.277601 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6db735-5566-4452-adeb-88fa28f4f417" containerName="mariadb-account-create-update" Feb 18 12:06:07 crc kubenswrapper[4717]: E0218 12:06:07.277640 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3ad8c1-04a7-46f5-9c76-98c92e3c2158" containerName="swift-ring-rebalance" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.277648 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3ad8c1-04a7-46f5-9c76-98c92e3c2158" containerName="swift-ring-rebalance" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.277849 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d6db735-5566-4452-adeb-88fa28f4f417" containerName="mariadb-account-create-update" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.277862 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac3ad8c1-04a7-46f5-9c76-98c92e3c2158" containerName="swift-ring-rebalance" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.278619 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.287440 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.293380 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cqvjv-config-mrzkg"] Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.399933 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-log-ovn\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.400301 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/45597084-7f68-457e-bc3c-f275f8de3524-additional-scripts\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.400453 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-run\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.400621 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45597084-7f68-457e-bc3c-f275f8de3524-scripts\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: 
\"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.400816 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-run-ovn\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.400929 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2mrc\" (UniqueName: \"kubernetes.io/projected/45597084-7f68-457e-bc3c-f275f8de3524-kube-api-access-v2mrc\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.503245 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-run\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.503685 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-run\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.503737 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45597084-7f68-457e-bc3c-f275f8de3524-scripts\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: 
\"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.505962 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-run-ovn\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.506194 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2mrc\" (UniqueName: \"kubernetes.io/projected/45597084-7f68-457e-bc3c-f275f8de3524-kube-api-access-v2mrc\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.506482 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-log-ovn\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.506741 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/45597084-7f68-457e-bc3c-f275f8de3524-additional-scripts\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.506678 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-log-ovn\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: 
\"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.506218 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-run-ovn\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.507509 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45597084-7f68-457e-bc3c-f275f8de3524-scripts\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.507770 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/45597084-7f68-457e-bc3c-f275f8de3524-additional-scripts\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.529452 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2mrc\" (UniqueName: \"kubernetes.io/projected/45597084-7f68-457e-bc3c-f275f8de3524-kube-api-access-v2mrc\") pod \"ovn-controller-cqvjv-config-mrzkg\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:07 crc kubenswrapper[4717]: I0218 12:06:07.608813 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:09 crc kubenswrapper[4717]: I0218 12:06:09.326738 4717 generic.go:334] "Generic (PLEG): container finished" podID="468aa28e-8245-4024-815a-24d469dc17bf" containerID="ab59f199ed48325d66abc30818dd4af42a257558bd4238a3e87ab39177370fd9" exitCode=0 Feb 18 12:06:09 crc kubenswrapper[4717]: I0218 12:06:09.326858 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"468aa28e-8245-4024-815a-24d469dc17bf","Type":"ContainerDied","Data":"ab59f199ed48325d66abc30818dd4af42a257558bd4238a3e87ab39177370fd9"} Feb 18 12:06:11 crc kubenswrapper[4717]: I0218 12:06:11.348913 4717 generic.go:334] "Generic (PLEG): container finished" podID="636b0761-84e8-4d2f-88f4-4845e2a05f80" containerID="8fc0fd4dff3445e269bbeb39633c839f589393a7bc4e27075b17d0d8abab0d28" exitCode=0 Feb 18 12:06:11 crc kubenswrapper[4717]: I0218 12:06:11.349398 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"636b0761-84e8-4d2f-88f4-4845e2a05f80","Type":"ContainerDied","Data":"8fc0fd4dff3445e269bbeb39633c839f589393a7bc4e27075b17d0d8abab0d28"} Feb 18 12:06:11 crc kubenswrapper[4717]: I0218 12:06:11.838216 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cqvjv-config-mrzkg"] Feb 18 12:06:11 crc kubenswrapper[4717]: I0218 12:06:11.940150 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cqvjv" podUID="6e3a25d1-3ad3-4ecb-bca6-84643516d734" containerName="ovn-controller" probeResult="failure" output=< Feb 18 12:06:11 crc kubenswrapper[4717]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 12:06:11 crc kubenswrapper[4717]: > Feb 18 12:06:12 crc kubenswrapper[4717]: I0218 12:06:12.375244 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"9102b2e6-400b-4ba1-97a2-eb5be85f778a","Type":"ContainerStarted","Data":"cf88f4e4ea18ad970c3752b86977869fbaa635bccc43a01ffda75dcc4f51ce77"} Feb 18 12:06:12 crc kubenswrapper[4717]: I0218 12:06:12.375915 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9102b2e6-400b-4ba1-97a2-eb5be85f778a","Type":"ContainerStarted","Data":"65fbb1e3b35ba6398081fd320710310b1cafc265119f5def776677a8a50ba655"} Feb 18 12:06:12 crc kubenswrapper[4717]: I0218 12:06:12.375935 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9102b2e6-400b-4ba1-97a2-eb5be85f778a","Type":"ContainerStarted","Data":"100fae59928a7c6bb6f6a288f65fa118b270b8fbcf5cb335626107b176d92438"} Feb 18 12:06:12 crc kubenswrapper[4717]: I0218 12:06:12.389496 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"468aa28e-8245-4024-815a-24d469dc17bf","Type":"ContainerStarted","Data":"de82c264624647485061aa85113df9ce8f39d0e71721f79034675fb3dfb42eda"} Feb 18 12:06:12 crc kubenswrapper[4717]: I0218 12:06:12.389889 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:06:12 crc kubenswrapper[4717]: I0218 12:06:12.398114 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"636b0761-84e8-4d2f-88f4-4845e2a05f80","Type":"ContainerStarted","Data":"e213779ca6b86907b4e563ef03958688b5a8c45dec2d35346bfbf6cb3e8ba5b2"} Feb 18 12:06:12 crc kubenswrapper[4717]: I0218 12:06:12.399229 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 12:06:12 crc kubenswrapper[4717]: I0218 12:06:12.402036 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-h7p5l" 
event={"ID":"8431e64c-4bcb-4dce-a7bb-123b54445b08","Type":"ContainerStarted","Data":"09a8c65290545784997245bdc5cec4e001ff5c4cf0c72928463bf354a034525d"} Feb 18 12:06:12 crc kubenswrapper[4717]: I0218 12:06:12.405198 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cqvjv-config-mrzkg" event={"ID":"45597084-7f68-457e-bc3c-f275f8de3524","Type":"ContainerStarted","Data":"58cfacd5ca3c9dd2b8b7382a19e739c4cc900cc443ef626eff7f217a1351736b"} Feb 18 12:06:12 crc kubenswrapper[4717]: I0218 12:06:12.446223 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.417770162 podStartE2EDuration="1m16.446191088s" podCreationTimestamp="2026-02-18 12:04:56 +0000 UTC" firstStartedPulling="2026-02-18 12:04:59.000874822 +0000 UTC m=+933.402976138" lastFinishedPulling="2026-02-18 12:05:35.029295748 +0000 UTC m=+969.431397064" observedRunningTime="2026-02-18 12:06:12.43152555 +0000 UTC m=+1006.833626866" watchObservedRunningTime="2026-02-18 12:06:12.446191088 +0000 UTC m=+1006.848292404" Feb 18 12:06:12 crc kubenswrapper[4717]: I0218 12:06:12.462814 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cqvjv-config-mrzkg" podStartSLOduration=5.462782989 podStartE2EDuration="5.462782989s" podCreationTimestamp="2026-02-18 12:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:06:12.454128578 +0000 UTC m=+1006.856229904" watchObservedRunningTime="2026-02-18 12:06:12.462782989 +0000 UTC m=+1006.864884305" Feb 18 12:06:12 crc kubenswrapper[4717]: I0218 12:06:12.487929 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371960.366875 podStartE2EDuration="1m16.487900878s" podCreationTimestamp="2026-02-18 12:04:56 +0000 UTC" 
firstStartedPulling="2026-02-18 12:04:58.655044332 +0000 UTC m=+933.057145658" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:06:12.47934611 +0000 UTC m=+1006.881447416" watchObservedRunningTime="2026-02-18 12:06:12.487900878 +0000 UTC m=+1006.890002194" Feb 18 12:06:12 crc kubenswrapper[4717]: I0218 12:06:12.777022 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:06:12 crc kubenswrapper[4717]: I0218 12:06:12.777638 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:06:13 crc kubenswrapper[4717]: I0218 12:06:13.444842 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9102b2e6-400b-4ba1-97a2-eb5be85f778a","Type":"ContainerStarted","Data":"2738d1594e2ffcb6e29fca2bfa73c406bb940f9277bf4ce9dc88ba0fe0ceb44f"} Feb 18 12:06:13 crc kubenswrapper[4717]: I0218 12:06:13.446813 4717 generic.go:334] "Generic (PLEG): container finished" podID="45597084-7f68-457e-bc3c-f275f8de3524" containerID="98f37166bf20daf2e0b76522a23a1cf0833f4c51beecdb70ab24cc6b8693aba2" exitCode=0 Feb 18 12:06:13 crc kubenswrapper[4717]: I0218 12:06:13.446948 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cqvjv-config-mrzkg" event={"ID":"45597084-7f68-457e-bc3c-f275f8de3524","Type":"ContainerDied","Data":"98f37166bf20daf2e0b76522a23a1cf0833f4c51beecdb70ab24cc6b8693aba2"} Feb 18 12:06:13 crc kubenswrapper[4717]: I0218 12:06:13.475114 4717 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-h7p5l" podStartSLOduration=3.507557986 podStartE2EDuration="17.47509085s" podCreationTimestamp="2026-02-18 12:05:56 +0000 UTC" firstStartedPulling="2026-02-18 12:05:57.475811017 +0000 UTC m=+991.877912333" lastFinishedPulling="2026-02-18 12:06:11.443343881 +0000 UTC m=+1005.845445197" observedRunningTime="2026-02-18 12:06:12.531001167 +0000 UTC m=+1006.933102483" watchObservedRunningTime="2026-02-18 12:06:13.47509085 +0000 UTC m=+1007.877192156" Feb 18 12:06:14 crc kubenswrapper[4717]: I0218 12:06:14.776954 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:14 crc kubenswrapper[4717]: I0218 12:06:14.978060 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2mrc\" (UniqueName: \"kubernetes.io/projected/45597084-7f68-457e-bc3c-f275f8de3524-kube-api-access-v2mrc\") pod \"45597084-7f68-457e-bc3c-f275f8de3524\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " Feb 18 12:06:14 crc kubenswrapper[4717]: I0218 12:06:14.978147 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45597084-7f68-457e-bc3c-f275f8de3524-scripts\") pod \"45597084-7f68-457e-bc3c-f275f8de3524\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " Feb 18 12:06:14 crc kubenswrapper[4717]: I0218 12:06:14.978221 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-log-ovn\") pod \"45597084-7f68-457e-bc3c-f275f8de3524\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " Feb 18 12:06:14 crc kubenswrapper[4717]: I0218 12:06:14.978355 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-run-ovn\") pod \"45597084-7f68-457e-bc3c-f275f8de3524\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " Feb 18 12:06:14 crc kubenswrapper[4717]: I0218 12:06:14.978432 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-run\") pod \"45597084-7f68-457e-bc3c-f275f8de3524\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " Feb 18 12:06:14 crc kubenswrapper[4717]: I0218 12:06:14.978486 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/45597084-7f68-457e-bc3c-f275f8de3524-additional-scripts\") pod \"45597084-7f68-457e-bc3c-f275f8de3524\" (UID: \"45597084-7f68-457e-bc3c-f275f8de3524\") " Feb 18 12:06:14 crc kubenswrapper[4717]: I0218 12:06:14.978913 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "45597084-7f68-457e-bc3c-f275f8de3524" (UID: "45597084-7f68-457e-bc3c-f275f8de3524"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 12:06:14 crc kubenswrapper[4717]: I0218 12:06:14.979040 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "45597084-7f68-457e-bc3c-f275f8de3524" (UID: "45597084-7f68-457e-bc3c-f275f8de3524"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 12:06:14 crc kubenswrapper[4717]: I0218 12:06:14.979091 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-run" (OuterVolumeSpecName: "var-run") pod "45597084-7f68-457e-bc3c-f275f8de3524" (UID: "45597084-7f68-457e-bc3c-f275f8de3524"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 12:06:14 crc kubenswrapper[4717]: I0218 12:06:14.980079 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45597084-7f68-457e-bc3c-f275f8de3524-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "45597084-7f68-457e-bc3c-f275f8de3524" (UID: "45597084-7f68-457e-bc3c-f275f8de3524"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:14 crc kubenswrapper[4717]: I0218 12:06:14.986189 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45597084-7f68-457e-bc3c-f275f8de3524-scripts" (OuterVolumeSpecName: "scripts") pod "45597084-7f68-457e-bc3c-f275f8de3524" (UID: "45597084-7f68-457e-bc3c-f275f8de3524"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:14 crc kubenswrapper[4717]: I0218 12:06:14.986603 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45597084-7f68-457e-bc3c-f275f8de3524-kube-api-access-v2mrc" (OuterVolumeSpecName: "kube-api-access-v2mrc") pod "45597084-7f68-457e-bc3c-f275f8de3524" (UID: "45597084-7f68-457e-bc3c-f275f8de3524"). InnerVolumeSpecName "kube-api-access-v2mrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:06:14 crc kubenswrapper[4717]: I0218 12:06:14.986803 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cqvjv-config-mrzkg"] Feb 18 12:06:14 crc kubenswrapper[4717]: I0218 12:06:14.995524 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-cqvjv-config-mrzkg"] Feb 18 12:06:15 crc kubenswrapper[4717]: I0218 12:06:15.050638 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45597084-7f68-457e-bc3c-f275f8de3524" path="/var/lib/kubelet/pods/45597084-7f68-457e-bc3c-f275f8de3524/volumes" Feb 18 12:06:15 crc kubenswrapper[4717]: I0218 12:06:15.081114 4717 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:15 crc kubenswrapper[4717]: I0218 12:06:15.081158 4717 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-run\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:15 crc kubenswrapper[4717]: I0218 12:06:15.081174 4717 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/45597084-7f68-457e-bc3c-f275f8de3524-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:15 crc kubenswrapper[4717]: I0218 12:06:15.081187 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2mrc\" (UniqueName: \"kubernetes.io/projected/45597084-7f68-457e-bc3c-f275f8de3524-kube-api-access-v2mrc\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:15 crc kubenswrapper[4717]: I0218 12:06:15.081200 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45597084-7f68-457e-bc3c-f275f8de3524-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:15 crc 
kubenswrapper[4717]: I0218 12:06:15.081212 4717 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/45597084-7f68-457e-bc3c-f275f8de3524-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:15 crc kubenswrapper[4717]: I0218 12:06:15.468218 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9102b2e6-400b-4ba1-97a2-eb5be85f778a","Type":"ContainerStarted","Data":"a3c99970eaf446377178e7ea33d11595e40be85d07e7530a8c1b87ab1e1855ee"} Feb 18 12:06:15 crc kubenswrapper[4717]: I0218 12:06:15.468711 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9102b2e6-400b-4ba1-97a2-eb5be85f778a","Type":"ContainerStarted","Data":"632e2cef3756ad3ddb1c730dc0a5f765f21383d4dcae132d01fffefdc20a2b13"} Feb 18 12:06:15 crc kubenswrapper[4717]: I0218 12:06:15.468723 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9102b2e6-400b-4ba1-97a2-eb5be85f778a","Type":"ContainerStarted","Data":"f436ee4666a74749af22ed2b1cc5fb039a53071f42ec513b401ccf323dadbace"} Feb 18 12:06:15 crc kubenswrapper[4717]: I0218 12:06:15.468734 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9102b2e6-400b-4ba1-97a2-eb5be85f778a","Type":"ContainerStarted","Data":"886510edbc41671b0fc3c5d56648d315a422f52bbc5863054aea9ab387f38e37"} Feb 18 12:06:15 crc kubenswrapper[4717]: I0218 12:06:15.471168 4717 scope.go:117] "RemoveContainer" containerID="98f37166bf20daf2e0b76522a23a1cf0833f4c51beecdb70ab24cc6b8693aba2" Feb 18 12:06:15 crc kubenswrapper[4717]: I0218 12:06:15.471272 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cqvjv-config-mrzkg" Feb 18 12:06:16 crc kubenswrapper[4717]: I0218 12:06:16.920505 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-cqvjv" Feb 18 12:06:17 crc kubenswrapper[4717]: I0218 12:06:17.504510 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9102b2e6-400b-4ba1-97a2-eb5be85f778a","Type":"ContainerStarted","Data":"56292fbd842e47a0c7c3b4e27de515f0040d2ff0fe929682d72725c665852318"} Feb 18 12:06:18 crc kubenswrapper[4717]: I0218 12:06:18.520633 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9102b2e6-400b-4ba1-97a2-eb5be85f778a","Type":"ContainerStarted","Data":"e0296e260f3d8b123c909a1f0f35f70d76b5be420d0c04713fe7d9c974dc7c1f"} Feb 18 12:06:18 crc kubenswrapper[4717]: I0218 12:06:18.521061 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9102b2e6-400b-4ba1-97a2-eb5be85f778a","Type":"ContainerStarted","Data":"388793eaa8e2465c2fd27db1cec4b503eee4cf530a99539291f84b16595d774a"} Feb 18 12:06:18 crc kubenswrapper[4717]: I0218 12:06:18.521075 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9102b2e6-400b-4ba1-97a2-eb5be85f778a","Type":"ContainerStarted","Data":"e9766778a4995095a834ad7d0464ca7f53693bdf568d5d772818841c695c6fcc"} Feb 18 12:06:18 crc kubenswrapper[4717]: I0218 12:06:18.521085 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9102b2e6-400b-4ba1-97a2-eb5be85f778a","Type":"ContainerStarted","Data":"e1f8f7bd4aa4544bcd238792aa072ab81e4e7bfcb168af401abb27144fcfa3f3"} Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.009780 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"9102b2e6-400b-4ba1-97a2-eb5be85f778a","Type":"ContainerStarted","Data":"fcfdcd2e1bd1816ad29f16533afd1916e9d83b52b3526d099b05eed188a1529d"} Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.010365 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9102b2e6-400b-4ba1-97a2-eb5be85f778a","Type":"ContainerStarted","Data":"dfdb365ae28fdc8c3f8eab4c172e9ba64efb0b4355b4806580e339f47f3fb409"} Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.066380 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.46223307 podStartE2EDuration="38.066356582s" podCreationTimestamp="2026-02-18 12:05:43 +0000 UTC" firstStartedPulling="2026-02-18 12:06:01.632342762 +0000 UTC m=+996.034444078" lastFinishedPulling="2026-02-18 12:06:17.236466274 +0000 UTC m=+1011.638567590" observedRunningTime="2026-02-18 12:06:21.052528777 +0000 UTC m=+1015.454630113" watchObservedRunningTime="2026-02-18 12:06:21.066356582 +0000 UTC m=+1015.468457898" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.380481 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-bhxlz"] Feb 18 12:06:21 crc kubenswrapper[4717]: E0218 12:06:21.380987 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45597084-7f68-457e-bc3c-f275f8de3524" containerName="ovn-config" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.381008 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="45597084-7f68-457e-bc3c-f275f8de3524" containerName="ovn-config" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.381278 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="45597084-7f68-457e-bc3c-f275f8de3524" containerName="ovn-config" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.382416 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.384703 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.394113 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.394244 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.394322 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.394447 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-config\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.394593 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-7kqpz\" (UniqueName: \"kubernetes.io/projected/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-kube-api-access-7kqpz\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.394766 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.401013 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-bhxlz"] Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.497034 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.497201 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.497337 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " 
pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.497404 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.497463 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-config\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.497536 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kqpz\" (UniqueName: \"kubernetes.io/projected/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-kube-api-access-7kqpz\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.498482 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.498509 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: 
I0218 12:06:21.498482 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-config\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.501121 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.501418 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.523094 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kqpz\" (UniqueName: \"kubernetes.io/projected/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-kube-api-access-7kqpz\") pod \"dnsmasq-dns-764c5664d7-bhxlz\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:21 crc kubenswrapper[4717]: I0218 12:06:21.750901 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:22 crc kubenswrapper[4717]: I0218 12:06:22.251059 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-bhxlz"] Feb 18 12:06:22 crc kubenswrapper[4717]: W0218 12:06:22.259135 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod970a5ee9_ea1f_46f2_b20e_5311d2839d6d.slice/crio-11eff42ff1d1ad310ae23d85ec526562fca6dc7c1267bee9229186909eb4a775 WatchSource:0}: Error finding container 11eff42ff1d1ad310ae23d85ec526562fca6dc7c1267bee9229186909eb4a775: Status 404 returned error can't find the container with id 11eff42ff1d1ad310ae23d85ec526562fca6dc7c1267bee9229186909eb4a775 Feb 18 12:06:23 crc kubenswrapper[4717]: I0218 12:06:23.030390 4717 generic.go:334] "Generic (PLEG): container finished" podID="970a5ee9-ea1f-46f2-b20e-5311d2839d6d" containerID="1fd1ca8a13f7a7e89c4de3bbdfa4f9dbbe44ad0ab4810d6dc8ccfd0e155a5860" exitCode=0 Feb 18 12:06:23 crc kubenswrapper[4717]: I0218 12:06:23.030497 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" event={"ID":"970a5ee9-ea1f-46f2-b20e-5311d2839d6d","Type":"ContainerDied","Data":"1fd1ca8a13f7a7e89c4de3bbdfa4f9dbbe44ad0ab4810d6dc8ccfd0e155a5860"} Feb 18 12:06:23 crc kubenswrapper[4717]: I0218 12:06:23.031076 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" event={"ID":"970a5ee9-ea1f-46f2-b20e-5311d2839d6d","Type":"ContainerStarted","Data":"11eff42ff1d1ad310ae23d85ec526562fca6dc7c1267bee9229186909eb4a775"} Feb 18 12:06:23 crc kubenswrapper[4717]: I0218 12:06:23.042192 4717 generic.go:334] "Generic (PLEG): container finished" podID="8431e64c-4bcb-4dce-a7bb-123b54445b08" containerID="09a8c65290545784997245bdc5cec4e001ff5c4cf0c72928463bf354a034525d" exitCode=0 Feb 18 12:06:23 crc kubenswrapper[4717]: I0218 12:06:23.053466 4717 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-h7p5l" event={"ID":"8431e64c-4bcb-4dce-a7bb-123b54445b08","Type":"ContainerDied","Data":"09a8c65290545784997245bdc5cec4e001ff5c4cf0c72928463bf354a034525d"} Feb 18 12:06:24 crc kubenswrapper[4717]: I0218 12:06:24.054919 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" event={"ID":"970a5ee9-ea1f-46f2-b20e-5311d2839d6d","Type":"ContainerStarted","Data":"14f14b6635ca7f27e56c2dfdb7ccf82eecfa5088c652586307fbb5b050527583"} Feb 18 12:06:24 crc kubenswrapper[4717]: I0218 12:06:24.085383 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" podStartSLOduration=3.085353023 podStartE2EDuration="3.085353023s" podCreationTimestamp="2026-02-18 12:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:06:24.077581587 +0000 UTC m=+1018.479682903" watchObservedRunningTime="2026-02-18 12:06:24.085353023 +0000 UTC m=+1018.487454349" Feb 18 12:06:24 crc kubenswrapper[4717]: I0218 12:06:24.695893 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-h7p5l" Feb 18 12:06:24 crc kubenswrapper[4717]: I0218 12:06:24.784808 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-config-data\") pod \"8431e64c-4bcb-4dce-a7bb-123b54445b08\" (UID: \"8431e64c-4bcb-4dce-a7bb-123b54445b08\") " Feb 18 12:06:24 crc kubenswrapper[4717]: I0218 12:06:24.784904 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-combined-ca-bundle\") pod \"8431e64c-4bcb-4dce-a7bb-123b54445b08\" (UID: \"8431e64c-4bcb-4dce-a7bb-123b54445b08\") " Feb 18 12:06:24 crc kubenswrapper[4717]: I0218 12:06:24.785028 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-db-sync-config-data\") pod \"8431e64c-4bcb-4dce-a7bb-123b54445b08\" (UID: \"8431e64c-4bcb-4dce-a7bb-123b54445b08\") " Feb 18 12:06:24 crc kubenswrapper[4717]: I0218 12:06:24.785173 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ztqc\" (UniqueName: \"kubernetes.io/projected/8431e64c-4bcb-4dce-a7bb-123b54445b08-kube-api-access-7ztqc\") pod \"8431e64c-4bcb-4dce-a7bb-123b54445b08\" (UID: \"8431e64c-4bcb-4dce-a7bb-123b54445b08\") " Feb 18 12:06:24 crc kubenswrapper[4717]: I0218 12:06:24.790580 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8431e64c-4bcb-4dce-a7bb-123b54445b08" (UID: "8431e64c-4bcb-4dce-a7bb-123b54445b08"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:06:24 crc kubenswrapper[4717]: I0218 12:06:24.790916 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8431e64c-4bcb-4dce-a7bb-123b54445b08-kube-api-access-7ztqc" (OuterVolumeSpecName: "kube-api-access-7ztqc") pod "8431e64c-4bcb-4dce-a7bb-123b54445b08" (UID: "8431e64c-4bcb-4dce-a7bb-123b54445b08"). InnerVolumeSpecName "kube-api-access-7ztqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:06:24 crc kubenswrapper[4717]: I0218 12:06:24.814166 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8431e64c-4bcb-4dce-a7bb-123b54445b08" (UID: "8431e64c-4bcb-4dce-a7bb-123b54445b08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:06:24 crc kubenswrapper[4717]: I0218 12:06:24.828372 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-config-data" (OuterVolumeSpecName: "config-data") pod "8431e64c-4bcb-4dce-a7bb-123b54445b08" (UID: "8431e64c-4bcb-4dce-a7bb-123b54445b08"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:06:24 crc kubenswrapper[4717]: I0218 12:06:24.887867 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ztqc\" (UniqueName: \"kubernetes.io/projected/8431e64c-4bcb-4dce-a7bb-123b54445b08-kube-api-access-7ztqc\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:24 crc kubenswrapper[4717]: I0218 12:06:24.887935 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:24 crc kubenswrapper[4717]: I0218 12:06:24.887951 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:24 crc kubenswrapper[4717]: I0218 12:06:24.887963 4717 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8431e64c-4bcb-4dce-a7bb-123b54445b08-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.069270 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-h7p5l" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.069397 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-h7p5l" event={"ID":"8431e64c-4bcb-4dce-a7bb-123b54445b08","Type":"ContainerDied","Data":"3468d2964cf05965bd5e56e84c2d8e8827b445f74a3b4bb540bea75f395c7390"} Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.069454 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3468d2964cf05965bd5e56e84c2d8e8827b445f74a3b4bb540bea75f395c7390" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.069485 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.527776 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-bhxlz"] Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.557392 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rsnmh"] Feb 18 12:06:25 crc kubenswrapper[4717]: E0218 12:06:25.557813 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8431e64c-4bcb-4dce-a7bb-123b54445b08" containerName="glance-db-sync" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.557830 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="8431e64c-4bcb-4dce-a7bb-123b54445b08" containerName="glance-db-sync" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.557997 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="8431e64c-4bcb-4dce-a7bb-123b54445b08" containerName="glance-db-sync" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.558938 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.587997 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rsnmh"] Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.601213 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vdxh\" (UniqueName: \"kubernetes.io/projected/7dd55c20-99bd-40db-add9-cea3d9b7221f-kube-api-access-4vdxh\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.601294 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.601339 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-config\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.601411 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.601445 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.601486 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.703300 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.703423 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.703477 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.703521 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4vdxh\" (UniqueName: \"kubernetes.io/projected/7dd55c20-99bd-40db-add9-cea3d9b7221f-kube-api-access-4vdxh\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.703554 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.703609 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-config\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.704589 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.704868 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.704986 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.705062 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-config\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.705899 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.733100 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vdxh\" (UniqueName: \"kubernetes.io/projected/7dd55c20-99bd-40db-add9-cea3d9b7221f-kube-api-access-4vdxh\") pod \"dnsmasq-dns-74f6bcbc87-rsnmh\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:25 crc kubenswrapper[4717]: I0218 12:06:25.878840 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:26 crc kubenswrapper[4717]: I0218 12:06:26.383229 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rsnmh"] Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.089340 4717 generic.go:334] "Generic (PLEG): container finished" podID="7dd55c20-99bd-40db-add9-cea3d9b7221f" containerID="6c97051ccc0d4f0e1f79144d2256a6dd22c4eb62d92034487f439b13ea475708" exitCode=0 Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.089459 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" event={"ID":"7dd55c20-99bd-40db-add9-cea3d9b7221f","Type":"ContainerDied","Data":"6c97051ccc0d4f0e1f79144d2256a6dd22c4eb62d92034487f439b13ea475708"} Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.090087 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" event={"ID":"7dd55c20-99bd-40db-add9-cea3d9b7221f","Type":"ContainerStarted","Data":"1867dbe90910292dda16eac2cf6b28b59e2dca3f457f55c0216d84dab8885d4f"} Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.090502 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" podUID="970a5ee9-ea1f-46f2-b20e-5311d2839d6d" containerName="dnsmasq-dns" containerID="cri-o://14f14b6635ca7f27e56c2dfdb7ccf82eecfa5088c652586307fbb5b050527583" gracePeriod=10 Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.544218 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.666579 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-config\") pod \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.666680 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-ovsdbserver-nb\") pod \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.666815 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-dns-swift-storage-0\") pod \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.666850 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-dns-svc\") pod \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.666928 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kqpz\" (UniqueName: \"kubernetes.io/projected/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-kube-api-access-7kqpz\") pod \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.666976 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-ovsdbserver-sb\") pod \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\" (UID: \"970a5ee9-ea1f-46f2-b20e-5311d2839d6d\") " Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.675569 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-kube-api-access-7kqpz" (OuterVolumeSpecName: "kube-api-access-7kqpz") pod "970a5ee9-ea1f-46f2-b20e-5311d2839d6d" (UID: "970a5ee9-ea1f-46f2-b20e-5311d2839d6d"). InnerVolumeSpecName "kube-api-access-7kqpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.712338 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-config" (OuterVolumeSpecName: "config") pod "970a5ee9-ea1f-46f2-b20e-5311d2839d6d" (UID: "970a5ee9-ea1f-46f2-b20e-5311d2839d6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.716735 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "970a5ee9-ea1f-46f2-b20e-5311d2839d6d" (UID: "970a5ee9-ea1f-46f2-b20e-5311d2839d6d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.720803 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "970a5ee9-ea1f-46f2-b20e-5311d2839d6d" (UID: "970a5ee9-ea1f-46f2-b20e-5311d2839d6d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.721200 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "970a5ee9-ea1f-46f2-b20e-5311d2839d6d" (UID: "970a5ee9-ea1f-46f2-b20e-5311d2839d6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.727367 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "970a5ee9-ea1f-46f2-b20e-5311d2839d6d" (UID: "970a5ee9-ea1f-46f2-b20e-5311d2839d6d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.769922 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.769965 4717 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.769977 4717 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.769986 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kqpz\" (UniqueName: \"kubernetes.io/projected/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-kube-api-access-7kqpz\") on node \"crc\" 
DevicePath \"\"" Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.770001 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:27 crc kubenswrapper[4717]: I0218 12:06:27.770010 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/970a5ee9-ea1f-46f2-b20e-5311d2839d6d-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.026600 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.102942 4717 generic.go:334] "Generic (PLEG): container finished" podID="970a5ee9-ea1f-46f2-b20e-5311d2839d6d" containerID="14f14b6635ca7f27e56c2dfdb7ccf82eecfa5088c652586307fbb5b050527583" exitCode=0 Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.103092 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" event={"ID":"970a5ee9-ea1f-46f2-b20e-5311d2839d6d","Type":"ContainerDied","Data":"14f14b6635ca7f27e56c2dfdb7ccf82eecfa5088c652586307fbb5b050527583"} Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.103140 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" event={"ID":"970a5ee9-ea1f-46f2-b20e-5311d2839d6d","Type":"ContainerDied","Data":"11eff42ff1d1ad310ae23d85ec526562fca6dc7c1267bee9229186909eb4a775"} Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.103169 4717 scope.go:117] "RemoveContainer" containerID="14f14b6635ca7f27e56c2dfdb7ccf82eecfa5088c652586307fbb5b050527583" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.103180 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-bhxlz" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.106326 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" event={"ID":"7dd55c20-99bd-40db-add9-cea3d9b7221f","Type":"ContainerStarted","Data":"dfb1552549d5f55e7322bd8da54d9317e03e970bcf48d97bc275e99b52fa2201"} Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.107071 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.135471 4717 scope.go:117] "RemoveContainer" containerID="1fd1ca8a13f7a7e89c4de3bbdfa4f9dbbe44ad0ab4810d6dc8ccfd0e155a5860" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.150329 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" podStartSLOduration=3.15030789 podStartE2EDuration="3.15030789s" podCreationTimestamp="2026-02-18 12:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:06:28.148418458 +0000 UTC m=+1022.550519774" watchObservedRunningTime="2026-02-18 12:06:28.15030789 +0000 UTC m=+1022.552409206" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.177785 4717 scope.go:117] "RemoveContainer" containerID="14f14b6635ca7f27e56c2dfdb7ccf82eecfa5088c652586307fbb5b050527583" Feb 18 12:06:28 crc kubenswrapper[4717]: E0218 12:06:28.184479 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14f14b6635ca7f27e56c2dfdb7ccf82eecfa5088c652586307fbb5b050527583\": container with ID starting with 14f14b6635ca7f27e56c2dfdb7ccf82eecfa5088c652586307fbb5b050527583 not found: ID does not exist" containerID="14f14b6635ca7f27e56c2dfdb7ccf82eecfa5088c652586307fbb5b050527583" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 
12:06:28.184581 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14f14b6635ca7f27e56c2dfdb7ccf82eecfa5088c652586307fbb5b050527583"} err="failed to get container status \"14f14b6635ca7f27e56c2dfdb7ccf82eecfa5088c652586307fbb5b050527583\": rpc error: code = NotFound desc = could not find container \"14f14b6635ca7f27e56c2dfdb7ccf82eecfa5088c652586307fbb5b050527583\": container with ID starting with 14f14b6635ca7f27e56c2dfdb7ccf82eecfa5088c652586307fbb5b050527583 not found: ID does not exist" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.184621 4717 scope.go:117] "RemoveContainer" containerID="1fd1ca8a13f7a7e89c4de3bbdfa4f9dbbe44ad0ab4810d6dc8ccfd0e155a5860" Feb 18 12:06:28 crc kubenswrapper[4717]: E0218 12:06:28.188538 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fd1ca8a13f7a7e89c4de3bbdfa4f9dbbe44ad0ab4810d6dc8ccfd0e155a5860\": container with ID starting with 1fd1ca8a13f7a7e89c4de3bbdfa4f9dbbe44ad0ab4810d6dc8ccfd0e155a5860 not found: ID does not exist" containerID="1fd1ca8a13f7a7e89c4de3bbdfa4f9dbbe44ad0ab4810d6dc8ccfd0e155a5860" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.188605 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fd1ca8a13f7a7e89c4de3bbdfa4f9dbbe44ad0ab4810d6dc8ccfd0e155a5860"} err="failed to get container status \"1fd1ca8a13f7a7e89c4de3bbdfa4f9dbbe44ad0ab4810d6dc8ccfd0e155a5860\": rpc error: code = NotFound desc = could not find container \"1fd1ca8a13f7a7e89c4de3bbdfa4f9dbbe44ad0ab4810d6dc8ccfd0e155a5860\": container with ID starting with 1fd1ca8a13f7a7e89c4de3bbdfa4f9dbbe44ad0ab4810d6dc8ccfd0e155a5860 not found: ID does not exist" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.189645 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-bhxlz"] Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 
12:06:28.199303 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-bhxlz"] Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.410527 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.426808 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-wbnxr"] Feb 18 12:06:28 crc kubenswrapper[4717]: E0218 12:06:28.427309 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970a5ee9-ea1f-46f2-b20e-5311d2839d6d" containerName="init" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.427333 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="970a5ee9-ea1f-46f2-b20e-5311d2839d6d" containerName="init" Feb 18 12:06:28 crc kubenswrapper[4717]: E0218 12:06:28.427355 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970a5ee9-ea1f-46f2-b20e-5311d2839d6d" containerName="dnsmasq-dns" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.427364 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="970a5ee9-ea1f-46f2-b20e-5311d2839d6d" containerName="dnsmasq-dns" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.427568 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="970a5ee9-ea1f-46f2-b20e-5311d2839d6d" containerName="dnsmasq-dns" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.428372 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wbnxr" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.453949 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wbnxr"] Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.485105 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x72rc\" (UniqueName: \"kubernetes.io/projected/8b10c7d3-ae5b-4a8a-bba0-a696225d8879-kube-api-access-x72rc\") pod \"cinder-db-create-wbnxr\" (UID: \"8b10c7d3-ae5b-4a8a-bba0-a696225d8879\") " pod="openstack/cinder-db-create-wbnxr" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.485427 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b10c7d3-ae5b-4a8a-bba0-a696225d8879-operator-scripts\") pod \"cinder-db-create-wbnxr\" (UID: \"8b10c7d3-ae5b-4a8a-bba0-a696225d8879\") " pod="openstack/cinder-db-create-wbnxr" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.587288 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x72rc\" (UniqueName: \"kubernetes.io/projected/8b10c7d3-ae5b-4a8a-bba0-a696225d8879-kube-api-access-x72rc\") pod \"cinder-db-create-wbnxr\" (UID: \"8b10c7d3-ae5b-4a8a-bba0-a696225d8879\") " pod="openstack/cinder-db-create-wbnxr" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.587405 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b10c7d3-ae5b-4a8a-bba0-a696225d8879-operator-scripts\") pod \"cinder-db-create-wbnxr\" (UID: \"8b10c7d3-ae5b-4a8a-bba0-a696225d8879\") " pod="openstack/cinder-db-create-wbnxr" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.588347 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8b10c7d3-ae5b-4a8a-bba0-a696225d8879-operator-scripts\") pod \"cinder-db-create-wbnxr\" (UID: \"8b10c7d3-ae5b-4a8a-bba0-a696225d8879\") " pod="openstack/cinder-db-create-wbnxr" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.607425 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x72rc\" (UniqueName: \"kubernetes.io/projected/8b10c7d3-ae5b-4a8a-bba0-a696225d8879-kube-api-access-x72rc\") pod \"cinder-db-create-wbnxr\" (UID: \"8b10c7d3-ae5b-4a8a-bba0-a696225d8879\") " pod="openstack/cinder-db-create-wbnxr" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.615308 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-cgb2w"] Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.618594 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cgb2w" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.625365 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2c96-account-create-update-9pnt2"] Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.627085 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2c96-account-create-update-9pnt2" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.634384 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.637531 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-cgb2w"] Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.647973 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2c96-account-create-update-9pnt2"] Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.691783 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc3dee21-58fd-4508-9549-30ef7d1e145b-operator-scripts\") pod \"cinder-2c96-account-create-update-9pnt2\" (UID: \"dc3dee21-58fd-4508-9549-30ef7d1e145b\") " pod="openstack/cinder-2c96-account-create-update-9pnt2" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.691872 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chwgz\" (UniqueName: \"kubernetes.io/projected/dc3dee21-58fd-4508-9549-30ef7d1e145b-kube-api-access-chwgz\") pod \"cinder-2c96-account-create-update-9pnt2\" (UID: \"dc3dee21-58fd-4508-9549-30ef7d1e145b\") " pod="openstack/cinder-2c96-account-create-update-9pnt2" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.691918 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qq8w\" (UniqueName: \"kubernetes.io/projected/e432a478-a2c5-40bf-adc3-306647151c92-kube-api-access-5qq8w\") pod \"neutron-db-create-cgb2w\" (UID: \"e432a478-a2c5-40bf-adc3-306647151c92\") " pod="openstack/neutron-db-create-cgb2w" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.691939 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e432a478-a2c5-40bf-adc3-306647151c92-operator-scripts\") pod \"neutron-db-create-cgb2w\" (UID: \"e432a478-a2c5-40bf-adc3-306647151c92\") " pod="openstack/neutron-db-create-cgb2w" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.747468 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wbnxr" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.793585 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc3dee21-58fd-4508-9549-30ef7d1e145b-operator-scripts\") pod \"cinder-2c96-account-create-update-9pnt2\" (UID: \"dc3dee21-58fd-4508-9549-30ef7d1e145b\") " pod="openstack/cinder-2c96-account-create-update-9pnt2" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.794071 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chwgz\" (UniqueName: \"kubernetes.io/projected/dc3dee21-58fd-4508-9549-30ef7d1e145b-kube-api-access-chwgz\") pod \"cinder-2c96-account-create-update-9pnt2\" (UID: \"dc3dee21-58fd-4508-9549-30ef7d1e145b\") " pod="openstack/cinder-2c96-account-create-update-9pnt2" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.794146 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qq8w\" (UniqueName: \"kubernetes.io/projected/e432a478-a2c5-40bf-adc3-306647151c92-kube-api-access-5qq8w\") pod \"neutron-db-create-cgb2w\" (UID: \"e432a478-a2c5-40bf-adc3-306647151c92\") " pod="openstack/neutron-db-create-cgb2w" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.794230 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e432a478-a2c5-40bf-adc3-306647151c92-operator-scripts\") pod 
\"neutron-db-create-cgb2w\" (UID: \"e432a478-a2c5-40bf-adc3-306647151c92\") " pod="openstack/neutron-db-create-cgb2w" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.795228 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e432a478-a2c5-40bf-adc3-306647151c92-operator-scripts\") pod \"neutron-db-create-cgb2w\" (UID: \"e432a478-a2c5-40bf-adc3-306647151c92\") " pod="openstack/neutron-db-create-cgb2w" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.795893 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc3dee21-58fd-4508-9549-30ef7d1e145b-operator-scripts\") pod \"cinder-2c96-account-create-update-9pnt2\" (UID: \"dc3dee21-58fd-4508-9549-30ef7d1e145b\") " pod="openstack/cinder-2c96-account-create-update-9pnt2" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.823204 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8kcn6"] Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.823936 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qq8w\" (UniqueName: \"kubernetes.io/projected/e432a478-a2c5-40bf-adc3-306647151c92-kube-api-access-5qq8w\") pod \"neutron-db-create-cgb2w\" (UID: \"e432a478-a2c5-40bf-adc3-306647151c92\") " pod="openstack/neutron-db-create-cgb2w" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.824070 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chwgz\" (UniqueName: \"kubernetes.io/projected/dc3dee21-58fd-4508-9549-30ef7d1e145b-kube-api-access-chwgz\") pod \"cinder-2c96-account-create-update-9pnt2\" (UID: \"dc3dee21-58fd-4508-9549-30ef7d1e145b\") " pod="openstack/cinder-2c96-account-create-update-9pnt2" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.828720 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8kcn6" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.844403 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8kcn6"] Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.895782 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t88zl\" (UniqueName: \"kubernetes.io/projected/15f9b780-b9ee-4fa6-bf61-c2b5ec92a314-kube-api-access-t88zl\") pod \"barbican-db-create-8kcn6\" (UID: \"15f9b780-b9ee-4fa6-bf61-c2b5ec92a314\") " pod="openstack/barbican-db-create-8kcn6" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.895850 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15f9b780-b9ee-4fa6-bf61-c2b5ec92a314-operator-scripts\") pod \"barbican-db-create-8kcn6\" (UID: \"15f9b780-b9ee-4fa6-bf61-c2b5ec92a314\") " pod="openstack/barbican-db-create-8kcn6" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.909583 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-dz2k8"] Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.915297 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-dz2k8" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.929792 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.930130 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.930324 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.931213 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-djr66" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.947416 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dz2k8"] Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.977474 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cgb2w" Feb 18 12:06:28 crc kubenswrapper[4717]: I0218 12:06:28.996635 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2c96-account-create-update-9pnt2" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:28.999307 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t88zl\" (UniqueName: \"kubernetes.io/projected/15f9b780-b9ee-4fa6-bf61-c2b5ec92a314-kube-api-access-t88zl\") pod \"barbican-db-create-8kcn6\" (UID: \"15f9b780-b9ee-4fa6-bf61-c2b5ec92a314\") " pod="openstack/barbican-db-create-8kcn6" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:28.999390 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15f9b780-b9ee-4fa6-bf61-c2b5ec92a314-operator-scripts\") pod \"barbican-db-create-8kcn6\" (UID: \"15f9b780-b9ee-4fa6-bf61-c2b5ec92a314\") " pod="openstack/barbican-db-create-8kcn6" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:28.999483 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/546ddaab-7675-4c53-b86d-7f017d828784-config-data\") pod \"keystone-db-sync-dz2k8\" (UID: \"546ddaab-7675-4c53-b86d-7f017d828784\") " pod="openstack/keystone-db-sync-dz2k8" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:28.999557 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsj4k\" (UniqueName: \"kubernetes.io/projected/546ddaab-7675-4c53-b86d-7f017d828784-kube-api-access-tsj4k\") pod \"keystone-db-sync-dz2k8\" (UID: \"546ddaab-7675-4c53-b86d-7f017d828784\") " pod="openstack/keystone-db-sync-dz2k8" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:28.999680 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546ddaab-7675-4c53-b86d-7f017d828784-combined-ca-bundle\") pod \"keystone-db-sync-dz2k8\" (UID: 
\"546ddaab-7675-4c53-b86d-7f017d828784\") " pod="openstack/keystone-db-sync-dz2k8" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.003237 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15f9b780-b9ee-4fa6-bf61-c2b5ec92a314-operator-scripts\") pod \"barbican-db-create-8kcn6\" (UID: \"15f9b780-b9ee-4fa6-bf61-c2b5ec92a314\") " pod="openstack/barbican-db-create-8kcn6" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.103722 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546ddaab-7675-4c53-b86d-7f017d828784-combined-ca-bundle\") pod \"keystone-db-sync-dz2k8\" (UID: \"546ddaab-7675-4c53-b86d-7f017d828784\") " pod="openstack/keystone-db-sync-dz2k8" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.105001 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/546ddaab-7675-4c53-b86d-7f017d828784-config-data\") pod \"keystone-db-sync-dz2k8\" (UID: \"546ddaab-7675-4c53-b86d-7f017d828784\") " pod="openstack/keystone-db-sync-dz2k8" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.105063 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsj4k\" (UniqueName: \"kubernetes.io/projected/546ddaab-7675-4c53-b86d-7f017d828784-kube-api-access-tsj4k\") pod \"keystone-db-sync-dz2k8\" (UID: \"546ddaab-7675-4c53-b86d-7f017d828784\") " pod="openstack/keystone-db-sync-dz2k8" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.124073 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="970a5ee9-ea1f-46f2-b20e-5311d2839d6d" path="/var/lib/kubelet/pods/970a5ee9-ea1f-46f2-b20e-5311d2839d6d/volumes" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.125424 4717 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-9f34-account-create-update-rlr2f"] Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.132551 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9f34-account-create-update-rlr2f" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.141131 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t88zl\" (UniqueName: \"kubernetes.io/projected/15f9b780-b9ee-4fa6-bf61-c2b5ec92a314-kube-api-access-t88zl\") pod \"barbican-db-create-8kcn6\" (UID: \"15f9b780-b9ee-4fa6-bf61-c2b5ec92a314\") " pod="openstack/barbican-db-create-8kcn6" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.152779 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/546ddaab-7675-4c53-b86d-7f017d828784-config-data\") pod \"keystone-db-sync-dz2k8\" (UID: \"546ddaab-7675-4c53-b86d-7f017d828784\") " pod="openstack/keystone-db-sync-dz2k8" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.169792 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.170160 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546ddaab-7675-4c53-b86d-7f017d828784-combined-ca-bundle\") pod \"keystone-db-sync-dz2k8\" (UID: \"546ddaab-7675-4c53-b86d-7f017d828784\") " pod="openstack/keystone-db-sync-dz2k8" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.170366 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsj4k\" (UniqueName: \"kubernetes.io/projected/546ddaab-7675-4c53-b86d-7f017d828784-kube-api-access-tsj4k\") pod \"keystone-db-sync-dz2k8\" (UID: \"546ddaab-7675-4c53-b86d-7f017d828784\") " pod="openstack/keystone-db-sync-dz2k8" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.207858 4717 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9f34-account-create-update-rlr2f"] Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.210500 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8kcn6" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.211974 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xjm4\" (UniqueName: \"kubernetes.io/projected/dd279242-4b9b-4172-a147-f69d3d74117b-kube-api-access-2xjm4\") pod \"neutron-9f34-account-create-update-rlr2f\" (UID: \"dd279242-4b9b-4172-a147-f69d3d74117b\") " pod="openstack/neutron-9f34-account-create-update-rlr2f" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.212106 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd279242-4b9b-4172-a147-f69d3d74117b-operator-scripts\") pod \"neutron-9f34-account-create-update-rlr2f\" (UID: \"dd279242-4b9b-4172-a147-f69d3d74117b\") " pod="openstack/neutron-9f34-account-create-update-rlr2f" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.225310 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-09da-account-create-update-wzt88"] Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.227174 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-09da-account-create-update-wzt88" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.236990 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.241526 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-09da-account-create-update-wzt88"] Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.241916 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-dz2k8" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.313871 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrw9m\" (UniqueName: \"kubernetes.io/projected/1bf6f923-4c7b-4ba8-8545-1fb97ad5551b-kube-api-access-hrw9m\") pod \"barbican-09da-account-create-update-wzt88\" (UID: \"1bf6f923-4c7b-4ba8-8545-1fb97ad5551b\") " pod="openstack/barbican-09da-account-create-update-wzt88" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.314027 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bf6f923-4c7b-4ba8-8545-1fb97ad5551b-operator-scripts\") pod \"barbican-09da-account-create-update-wzt88\" (UID: \"1bf6f923-4c7b-4ba8-8545-1fb97ad5551b\") " pod="openstack/barbican-09da-account-create-update-wzt88" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.314165 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd279242-4b9b-4172-a147-f69d3d74117b-operator-scripts\") pod \"neutron-9f34-account-create-update-rlr2f\" (UID: \"dd279242-4b9b-4172-a147-f69d3d74117b\") " pod="openstack/neutron-9f34-account-create-update-rlr2f" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.314452 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xjm4\" (UniqueName: \"kubernetes.io/projected/dd279242-4b9b-4172-a147-f69d3d74117b-kube-api-access-2xjm4\") pod \"neutron-9f34-account-create-update-rlr2f\" (UID: \"dd279242-4b9b-4172-a147-f69d3d74117b\") " pod="openstack/neutron-9f34-account-create-update-rlr2f" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.316734 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/dd279242-4b9b-4172-a147-f69d3d74117b-operator-scripts\") pod \"neutron-9f34-account-create-update-rlr2f\" (UID: \"dd279242-4b9b-4172-a147-f69d3d74117b\") " pod="openstack/neutron-9f34-account-create-update-rlr2f" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.340102 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xjm4\" (UniqueName: \"kubernetes.io/projected/dd279242-4b9b-4172-a147-f69d3d74117b-kube-api-access-2xjm4\") pod \"neutron-9f34-account-create-update-rlr2f\" (UID: \"dd279242-4b9b-4172-a147-f69d3d74117b\") " pod="openstack/neutron-9f34-account-create-update-rlr2f" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.415899 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrw9m\" (UniqueName: \"kubernetes.io/projected/1bf6f923-4c7b-4ba8-8545-1fb97ad5551b-kube-api-access-hrw9m\") pod \"barbican-09da-account-create-update-wzt88\" (UID: \"1bf6f923-4c7b-4ba8-8545-1fb97ad5551b\") " pod="openstack/barbican-09da-account-create-update-wzt88" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.415974 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bf6f923-4c7b-4ba8-8545-1fb97ad5551b-operator-scripts\") pod \"barbican-09da-account-create-update-wzt88\" (UID: \"1bf6f923-4c7b-4ba8-8545-1fb97ad5551b\") " pod="openstack/barbican-09da-account-create-update-wzt88" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.416881 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bf6f923-4c7b-4ba8-8545-1fb97ad5551b-operator-scripts\") pod \"barbican-09da-account-create-update-wzt88\" (UID: \"1bf6f923-4c7b-4ba8-8545-1fb97ad5551b\") " pod="openstack/barbican-09da-account-create-update-wzt88" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.446906 4717 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrw9m\" (UniqueName: \"kubernetes.io/projected/1bf6f923-4c7b-4ba8-8545-1fb97ad5551b-kube-api-access-hrw9m\") pod \"barbican-09da-account-create-update-wzt88\" (UID: \"1bf6f923-4c7b-4ba8-8545-1fb97ad5551b\") " pod="openstack/barbican-09da-account-create-update-wzt88" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.509245 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9f34-account-create-update-rlr2f" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.587702 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-09da-account-create-update-wzt88" Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.736433 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wbnxr"] Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.816469 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2c96-account-create-update-9pnt2"] Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.840383 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-cgb2w"] Feb 18 12:06:29 crc kubenswrapper[4717]: W0218 12:06:29.858649 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc3dee21_58fd_4508_9549_30ef7d1e145b.slice/crio-0cb8b5ab030ee83d45437a3ff7ba28244a4a0d08dedbdc2b40fd6b07d44f80ad WatchSource:0}: Error finding container 0cb8b5ab030ee83d45437a3ff7ba28244a4a0d08dedbdc2b40fd6b07d44f80ad: Status 404 returned error can't find the container with id 0cb8b5ab030ee83d45437a3ff7ba28244a4a0d08dedbdc2b40fd6b07d44f80ad Feb 18 12:06:29 crc kubenswrapper[4717]: I0218 12:06:29.958307 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dz2k8"] Feb 18 12:06:30 crc kubenswrapper[4717]: I0218 12:06:30.205892 
4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wbnxr" event={"ID":"8b10c7d3-ae5b-4a8a-bba0-a696225d8879","Type":"ContainerStarted","Data":"79120120721a34029d685ab0c58156c4287f6d59026cb793ebcb1cb68d76c247"} Feb 18 12:06:30 crc kubenswrapper[4717]: I0218 12:06:30.214698 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dz2k8" event={"ID":"546ddaab-7675-4c53-b86d-7f017d828784","Type":"ContainerStarted","Data":"8871aeaf83e2fb14252fb25776f58613ef29b7ae7893edb94689ba78f6554aba"} Feb 18 12:06:30 crc kubenswrapper[4717]: I0218 12:06:30.217540 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2c96-account-create-update-9pnt2" event={"ID":"dc3dee21-58fd-4508-9549-30ef7d1e145b","Type":"ContainerStarted","Data":"0cb8b5ab030ee83d45437a3ff7ba28244a4a0d08dedbdc2b40fd6b07d44f80ad"} Feb 18 12:06:30 crc kubenswrapper[4717]: I0218 12:06:30.229004 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cgb2w" event={"ID":"e432a478-a2c5-40bf-adc3-306647151c92","Type":"ContainerStarted","Data":"a93a4c616c43b446c814ada828cca647e0ccda46286803522294a27670921e7d"} Feb 18 12:06:30 crc kubenswrapper[4717]: I0218 12:06:30.238454 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-wbnxr" podStartSLOduration=2.238423737 podStartE2EDuration="2.238423737s" podCreationTimestamp="2026-02-18 12:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:06:30.227472832 +0000 UTC m=+1024.629574168" watchObservedRunningTime="2026-02-18 12:06:30.238423737 +0000 UTC m=+1024.640525053" Feb 18 12:06:30 crc kubenswrapper[4717]: I0218 12:06:30.269257 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-2c96-account-create-update-9pnt2" podStartSLOduration=2.269225763 
podStartE2EDuration="2.269225763s" podCreationTimestamp="2026-02-18 12:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:06:30.260782139 +0000 UTC m=+1024.662883455" watchObservedRunningTime="2026-02-18 12:06:30.269225763 +0000 UTC m=+1024.671327079" Feb 18 12:06:30 crc kubenswrapper[4717]: I0218 12:06:30.289392 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8kcn6"] Feb 18 12:06:30 crc kubenswrapper[4717]: I0218 12:06:30.295811 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-cgb2w" podStartSLOduration=2.295782852 podStartE2EDuration="2.295782852s" podCreationTimestamp="2026-02-18 12:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:06:30.280929869 +0000 UTC m=+1024.683031205" watchObservedRunningTime="2026-02-18 12:06:30.295782852 +0000 UTC m=+1024.697884168" Feb 18 12:06:30 crc kubenswrapper[4717]: I0218 12:06:30.390735 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-09da-account-create-update-wzt88"] Feb 18 12:06:30 crc kubenswrapper[4717]: W0218 12:06:30.396281 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bf6f923_4c7b_4ba8_8545_1fb97ad5551b.slice/crio-f9210945c43024989fa3ad18c927b806982e303a303457b0299ae6dd6e30815a WatchSource:0}: Error finding container f9210945c43024989fa3ad18c927b806982e303a303457b0299ae6dd6e30815a: Status 404 returned error can't find the container with id f9210945c43024989fa3ad18c927b806982e303a303457b0299ae6dd6e30815a Feb 18 12:06:30 crc kubenswrapper[4717]: I0218 12:06:30.403800 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9f34-account-create-update-rlr2f"] Feb 18 12:06:31 crc 
kubenswrapper[4717]: I0218 12:06:31.237721 4717 generic.go:334] "Generic (PLEG): container finished" podID="8b10c7d3-ae5b-4a8a-bba0-a696225d8879" containerID="945977b1eedcede8e7d09645421061525e3a3cf47ee24b39e1c2768a21281380" exitCode=0 Feb 18 12:06:31 crc kubenswrapper[4717]: I0218 12:06:31.237944 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wbnxr" event={"ID":"8b10c7d3-ae5b-4a8a-bba0-a696225d8879","Type":"ContainerDied","Data":"945977b1eedcede8e7d09645421061525e3a3cf47ee24b39e1c2768a21281380"} Feb 18 12:06:31 crc kubenswrapper[4717]: I0218 12:06:31.241091 4717 generic.go:334] "Generic (PLEG): container finished" podID="dc3dee21-58fd-4508-9549-30ef7d1e145b" containerID="f1259448d70e33739c061c221dfda715e99555b5eb48e3ccd923f3bd07018947" exitCode=0 Feb 18 12:06:31 crc kubenswrapper[4717]: I0218 12:06:31.241162 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2c96-account-create-update-9pnt2" event={"ID":"dc3dee21-58fd-4508-9549-30ef7d1e145b","Type":"ContainerDied","Data":"f1259448d70e33739c061c221dfda715e99555b5eb48e3ccd923f3bd07018947"} Feb 18 12:06:31 crc kubenswrapper[4717]: I0218 12:06:31.243385 4717 generic.go:334] "Generic (PLEG): container finished" podID="e432a478-a2c5-40bf-adc3-306647151c92" containerID="1c9a726739db9e4d6bba234f379808e8e67d3ff8f1e31c822a7ec7bc908235b6" exitCode=0 Feb 18 12:06:31 crc kubenswrapper[4717]: I0218 12:06:31.243450 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cgb2w" event={"ID":"e432a478-a2c5-40bf-adc3-306647151c92","Type":"ContainerDied","Data":"1c9a726739db9e4d6bba234f379808e8e67d3ff8f1e31c822a7ec7bc908235b6"} Feb 18 12:06:31 crc kubenswrapper[4717]: I0218 12:06:31.250302 4717 generic.go:334] "Generic (PLEG): container finished" podID="15f9b780-b9ee-4fa6-bf61-c2b5ec92a314" containerID="92af7a2e788f5ebded441346b2f5ca1d8adecc8c4d19d95e27509c0d2c3e001c" exitCode=0 Feb 18 12:06:31 crc kubenswrapper[4717]: I0218 
12:06:31.250411 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8kcn6" event={"ID":"15f9b780-b9ee-4fa6-bf61-c2b5ec92a314","Type":"ContainerDied","Data":"92af7a2e788f5ebded441346b2f5ca1d8adecc8c4d19d95e27509c0d2c3e001c"} Feb 18 12:06:31 crc kubenswrapper[4717]: I0218 12:06:31.250446 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8kcn6" event={"ID":"15f9b780-b9ee-4fa6-bf61-c2b5ec92a314","Type":"ContainerStarted","Data":"129d8cfb1466ca6c62ab25c983d3829a3113a1ed575d34adf17cbd8e3ff28114"} Feb 18 12:06:31 crc kubenswrapper[4717]: I0218 12:06:31.256521 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9f34-account-create-update-rlr2f" event={"ID":"dd279242-4b9b-4172-a147-f69d3d74117b","Type":"ContainerStarted","Data":"9c081280555879ab4ac0d4fe8485f00a2df10b33f5f4a014358ab02b130859cd"} Feb 18 12:06:31 crc kubenswrapper[4717]: I0218 12:06:31.256584 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9f34-account-create-update-rlr2f" event={"ID":"dd279242-4b9b-4172-a147-f69d3d74117b","Type":"ContainerStarted","Data":"feef60638e55e0cca3e7d091a2a76a5bb6d9c48481bc5247b84f68e5e613ec61"} Feb 18 12:06:31 crc kubenswrapper[4717]: I0218 12:06:31.263369 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-09da-account-create-update-wzt88" event={"ID":"1bf6f923-4c7b-4ba8-8545-1fb97ad5551b","Type":"ContainerStarted","Data":"ccc4d36e6632542ff1eed8016b104d40419a04f9867edf7eed407ed41e49996c"} Feb 18 12:06:31 crc kubenswrapper[4717]: I0218 12:06:31.263424 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-09da-account-create-update-wzt88" event={"ID":"1bf6f923-4c7b-4ba8-8545-1fb97ad5551b","Type":"ContainerStarted","Data":"f9210945c43024989fa3ad18c927b806982e303a303457b0299ae6dd6e30815a"} Feb 18 12:06:31 crc kubenswrapper[4717]: I0218 12:06:31.334590 4717 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/neutron-9f34-account-create-update-rlr2f" podStartSLOduration=3.334563229 podStartE2EDuration="3.334563229s" podCreationTimestamp="2026-02-18 12:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:06:31.285841754 +0000 UTC m=+1025.687943070" watchObservedRunningTime="2026-02-18 12:06:31.334563229 +0000 UTC m=+1025.736664545" Feb 18 12:06:31 crc kubenswrapper[4717]: I0218 12:06:31.352428 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-09da-account-create-update-wzt88" podStartSLOduration=2.352404335 podStartE2EDuration="2.352404335s" podCreationTimestamp="2026-02-18 12:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:06:31.306026775 +0000 UTC m=+1025.708128111" watchObservedRunningTime="2026-02-18 12:06:31.352404335 +0000 UTC m=+1025.754505651" Feb 18 12:06:32 crc kubenswrapper[4717]: I0218 12:06:32.279067 4717 generic.go:334] "Generic (PLEG): container finished" podID="dd279242-4b9b-4172-a147-f69d3d74117b" containerID="9c081280555879ab4ac0d4fe8485f00a2df10b33f5f4a014358ab02b130859cd" exitCode=0 Feb 18 12:06:32 crc kubenswrapper[4717]: I0218 12:06:32.279173 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9f34-account-create-update-rlr2f" event={"ID":"dd279242-4b9b-4172-a147-f69d3d74117b","Type":"ContainerDied","Data":"9c081280555879ab4ac0d4fe8485f00a2df10b33f5f4a014358ab02b130859cd"} Feb 18 12:06:32 crc kubenswrapper[4717]: I0218 12:06:32.282917 4717 generic.go:334] "Generic (PLEG): container finished" podID="1bf6f923-4c7b-4ba8-8545-1fb97ad5551b" containerID="ccc4d36e6632542ff1eed8016b104d40419a04f9867edf7eed407ed41e49996c" exitCode=0 Feb 18 12:06:32 crc kubenswrapper[4717]: I0218 12:06:32.283069 4717 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/barbican-09da-account-create-update-wzt88" event={"ID":"1bf6f923-4c7b-4ba8-8545-1fb97ad5551b","Type":"ContainerDied","Data":"ccc4d36e6632542ff1eed8016b104d40419a04f9867edf7eed407ed41e49996c"} Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:32.874050 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cgb2w" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:32.910954 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qq8w\" (UniqueName: \"kubernetes.io/projected/e432a478-a2c5-40bf-adc3-306647151c92-kube-api-access-5qq8w\") pod \"e432a478-a2c5-40bf-adc3-306647151c92\" (UID: \"e432a478-a2c5-40bf-adc3-306647151c92\") " Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:32.911025 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e432a478-a2c5-40bf-adc3-306647151c92-operator-scripts\") pod \"e432a478-a2c5-40bf-adc3-306647151c92\" (UID: \"e432a478-a2c5-40bf-adc3-306647151c92\") " Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:32.913866 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e432a478-a2c5-40bf-adc3-306647151c92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e432a478-a2c5-40bf-adc3-306647151c92" (UID: "e432a478-a2c5-40bf-adc3-306647151c92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:32.965443 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e432a478-a2c5-40bf-adc3-306647151c92-kube-api-access-5qq8w" (OuterVolumeSpecName: "kube-api-access-5qq8w") pod "e432a478-a2c5-40bf-adc3-306647151c92" (UID: "e432a478-a2c5-40bf-adc3-306647151c92"). InnerVolumeSpecName "kube-api-access-5qq8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.013690 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qq8w\" (UniqueName: \"kubernetes.io/projected/e432a478-a2c5-40bf-adc3-306647151c92-kube-api-access-5qq8w\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.013720 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e432a478-a2c5-40bf-adc3-306647151c92-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.094355 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wbnxr" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.099887 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2c96-account-create-update-9pnt2" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.112495 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8kcn6" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.220601 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chwgz\" (UniqueName: \"kubernetes.io/projected/dc3dee21-58fd-4508-9549-30ef7d1e145b-kube-api-access-chwgz\") pod \"dc3dee21-58fd-4508-9549-30ef7d1e145b\" (UID: \"dc3dee21-58fd-4508-9549-30ef7d1e145b\") " Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.220841 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b10c7d3-ae5b-4a8a-bba0-a696225d8879-operator-scripts\") pod \"8b10c7d3-ae5b-4a8a-bba0-a696225d8879\" (UID: \"8b10c7d3-ae5b-4a8a-bba0-a696225d8879\") " Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.220905 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc3dee21-58fd-4508-9549-30ef7d1e145b-operator-scripts\") pod \"dc3dee21-58fd-4508-9549-30ef7d1e145b\" (UID: \"dc3dee21-58fd-4508-9549-30ef7d1e145b\") " Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.220945 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t88zl\" (UniqueName: \"kubernetes.io/projected/15f9b780-b9ee-4fa6-bf61-c2b5ec92a314-kube-api-access-t88zl\") pod \"15f9b780-b9ee-4fa6-bf61-c2b5ec92a314\" (UID: \"15f9b780-b9ee-4fa6-bf61-c2b5ec92a314\") " Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.223577 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b10c7d3-ae5b-4a8a-bba0-a696225d8879-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b10c7d3-ae5b-4a8a-bba0-a696225d8879" (UID: "8b10c7d3-ae5b-4a8a-bba0-a696225d8879"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.225084 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc3dee21-58fd-4508-9549-30ef7d1e145b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc3dee21-58fd-4508-9549-30ef7d1e145b" (UID: "dc3dee21-58fd-4508-9549-30ef7d1e145b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.225973 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15f9b780-b9ee-4fa6-bf61-c2b5ec92a314-operator-scripts\") pod \"15f9b780-b9ee-4fa6-bf61-c2b5ec92a314\" (UID: \"15f9b780-b9ee-4fa6-bf61-c2b5ec92a314\") " Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.226187 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x72rc\" (UniqueName: \"kubernetes.io/projected/8b10c7d3-ae5b-4a8a-bba0-a696225d8879-kube-api-access-x72rc\") pod \"8b10c7d3-ae5b-4a8a-bba0-a696225d8879\" (UID: \"8b10c7d3-ae5b-4a8a-bba0-a696225d8879\") " Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.226741 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15f9b780-b9ee-4fa6-bf61-c2b5ec92a314-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15f9b780-b9ee-4fa6-bf61-c2b5ec92a314" (UID: "15f9b780-b9ee-4fa6-bf61-c2b5ec92a314"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.227736 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15f9b780-b9ee-4fa6-bf61-c2b5ec92a314-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.227755 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b10c7d3-ae5b-4a8a-bba0-a696225d8879-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.227766 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc3dee21-58fd-4508-9549-30ef7d1e145b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.236566 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b10c7d3-ae5b-4a8a-bba0-a696225d8879-kube-api-access-x72rc" (OuterVolumeSpecName: "kube-api-access-x72rc") pod "8b10c7d3-ae5b-4a8a-bba0-a696225d8879" (UID: "8b10c7d3-ae5b-4a8a-bba0-a696225d8879"). InnerVolumeSpecName "kube-api-access-x72rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.236760 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3dee21-58fd-4508-9549-30ef7d1e145b-kube-api-access-chwgz" (OuterVolumeSpecName: "kube-api-access-chwgz") pod "dc3dee21-58fd-4508-9549-30ef7d1e145b" (UID: "dc3dee21-58fd-4508-9549-30ef7d1e145b"). InnerVolumeSpecName "kube-api-access-chwgz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.237394 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15f9b780-b9ee-4fa6-bf61-c2b5ec92a314-kube-api-access-t88zl" (OuterVolumeSpecName: "kube-api-access-t88zl") pod "15f9b780-b9ee-4fa6-bf61-c2b5ec92a314" (UID: "15f9b780-b9ee-4fa6-bf61-c2b5ec92a314"). InnerVolumeSpecName "kube-api-access-t88zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.295308 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wbnxr" event={"ID":"8b10c7d3-ae5b-4a8a-bba0-a696225d8879","Type":"ContainerDied","Data":"79120120721a34029d685ab0c58156c4287f6d59026cb793ebcb1cb68d76c247"} Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.295363 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79120120721a34029d685ab0c58156c4287f6d59026cb793ebcb1cb68d76c247" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.295463 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wbnxr" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.299006 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8kcn6" event={"ID":"15f9b780-b9ee-4fa6-bf61-c2b5ec92a314","Type":"ContainerDied","Data":"129d8cfb1466ca6c62ab25c983d3829a3113a1ed575d34adf17cbd8e3ff28114"} Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.299068 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="129d8cfb1466ca6c62ab25c983d3829a3113a1ed575d34adf17cbd8e3ff28114" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.299183 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8kcn6" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.302795 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2c96-account-create-update-9pnt2" event={"ID":"dc3dee21-58fd-4508-9549-30ef7d1e145b","Type":"ContainerDied","Data":"0cb8b5ab030ee83d45437a3ff7ba28244a4a0d08dedbdc2b40fd6b07d44f80ad"} Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.302833 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cb8b5ab030ee83d45437a3ff7ba28244a4a0d08dedbdc2b40fd6b07d44f80ad" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.302937 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2c96-account-create-update-9pnt2" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.312042 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cgb2w" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.312385 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cgb2w" event={"ID":"e432a478-a2c5-40bf-adc3-306647151c92","Type":"ContainerDied","Data":"a93a4c616c43b446c814ada828cca647e0ccda46286803522294a27670921e7d"} Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.312453 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a93a4c616c43b446c814ada828cca647e0ccda46286803522294a27670921e7d" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.329701 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x72rc\" (UniqueName: \"kubernetes.io/projected/8b10c7d3-ae5b-4a8a-bba0-a696225d8879-kube-api-access-x72rc\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.330210 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chwgz\" (UniqueName: 
\"kubernetes.io/projected/dc3dee21-58fd-4508-9549-30ef7d1e145b-kube-api-access-chwgz\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:33 crc kubenswrapper[4717]: I0218 12:06:33.330221 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t88zl\" (UniqueName: \"kubernetes.io/projected/15f9b780-b9ee-4fa6-bf61-c2b5ec92a314-kube-api-access-t88zl\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:35 crc kubenswrapper[4717]: I0218 12:06:35.881553 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:06:35 crc kubenswrapper[4717]: I0218 12:06:35.953402 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-qb9hn"] Feb 18 12:06:35 crc kubenswrapper[4717]: I0218 12:06:35.953867 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-qb9hn" podUID="ea526311-abc8-4443-8345-75f047f7909a" containerName="dnsmasq-dns" containerID="cri-o://4b0ddd3de70c36b4da2c54c70b80014dc7288066de9d4a3f8d23f2603cacfc7c" gracePeriod=10 Feb 18 12:06:36 crc kubenswrapper[4717]: I0218 12:06:36.361744 4717 generic.go:334] "Generic (PLEG): container finished" podID="ea526311-abc8-4443-8345-75f047f7909a" containerID="4b0ddd3de70c36b4da2c54c70b80014dc7288066de9d4a3f8d23f2603cacfc7c" exitCode=0 Feb 18 12:06:36 crc kubenswrapper[4717]: I0218 12:06:36.361842 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-qb9hn" event={"ID":"ea526311-abc8-4443-8345-75f047f7909a","Type":"ContainerDied","Data":"4b0ddd3de70c36b4da2c54c70b80014dc7288066de9d4a3f8d23f2603cacfc7c"} Feb 18 12:06:36 crc kubenswrapper[4717]: I0218 12:06:36.721570 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-09da-account-create-update-wzt88" Feb 18 12:06:36 crc kubenswrapper[4717]: I0218 12:06:36.721578 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9f34-account-create-update-rlr2f" Feb 18 12:06:36 crc kubenswrapper[4717]: I0218 12:06:36.815132 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrw9m\" (UniqueName: \"kubernetes.io/projected/1bf6f923-4c7b-4ba8-8545-1fb97ad5551b-kube-api-access-hrw9m\") pod \"1bf6f923-4c7b-4ba8-8545-1fb97ad5551b\" (UID: \"1bf6f923-4c7b-4ba8-8545-1fb97ad5551b\") " Feb 18 12:06:36 crc kubenswrapper[4717]: I0218 12:06:36.815190 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd279242-4b9b-4172-a147-f69d3d74117b-operator-scripts\") pod \"dd279242-4b9b-4172-a147-f69d3d74117b\" (UID: \"dd279242-4b9b-4172-a147-f69d3d74117b\") " Feb 18 12:06:36 crc kubenswrapper[4717]: I0218 12:06:36.815459 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xjm4\" (UniqueName: \"kubernetes.io/projected/dd279242-4b9b-4172-a147-f69d3d74117b-kube-api-access-2xjm4\") pod \"dd279242-4b9b-4172-a147-f69d3d74117b\" (UID: \"dd279242-4b9b-4172-a147-f69d3d74117b\") " Feb 18 12:06:36 crc kubenswrapper[4717]: I0218 12:06:36.815522 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bf6f923-4c7b-4ba8-8545-1fb97ad5551b-operator-scripts\") pod \"1bf6f923-4c7b-4ba8-8545-1fb97ad5551b\" (UID: \"1bf6f923-4c7b-4ba8-8545-1fb97ad5551b\") " Feb 18 12:06:36 crc kubenswrapper[4717]: I0218 12:06:36.816694 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf6f923-4c7b-4ba8-8545-1fb97ad5551b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1bf6f923-4c7b-4ba8-8545-1fb97ad5551b" (UID: "1bf6f923-4c7b-4ba8-8545-1fb97ad5551b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:36 crc kubenswrapper[4717]: I0218 12:06:36.817937 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd279242-4b9b-4172-a147-f69d3d74117b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd279242-4b9b-4172-a147-f69d3d74117b" (UID: "dd279242-4b9b-4172-a147-f69d3d74117b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:36 crc kubenswrapper[4717]: I0218 12:06:36.836559 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf6f923-4c7b-4ba8-8545-1fb97ad5551b-kube-api-access-hrw9m" (OuterVolumeSpecName: "kube-api-access-hrw9m") pod "1bf6f923-4c7b-4ba8-8545-1fb97ad5551b" (UID: "1bf6f923-4c7b-4ba8-8545-1fb97ad5551b"). InnerVolumeSpecName "kube-api-access-hrw9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:06:36 crc kubenswrapper[4717]: I0218 12:06:36.857562 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd279242-4b9b-4172-a147-f69d3d74117b-kube-api-access-2xjm4" (OuterVolumeSpecName: "kube-api-access-2xjm4") pod "dd279242-4b9b-4172-a147-f69d3d74117b" (UID: "dd279242-4b9b-4172-a147-f69d3d74117b"). InnerVolumeSpecName "kube-api-access-2xjm4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:06:36 crc kubenswrapper[4717]: I0218 12:06:36.922634 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrw9m\" (UniqueName: \"kubernetes.io/projected/1bf6f923-4c7b-4ba8-8545-1fb97ad5551b-kube-api-access-hrw9m\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:36 crc kubenswrapper[4717]: I0218 12:06:36.922674 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd279242-4b9b-4172-a147-f69d3d74117b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:36 crc kubenswrapper[4717]: I0218 12:06:36.922690 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xjm4\" (UniqueName: \"kubernetes.io/projected/dd279242-4b9b-4172-a147-f69d3d74117b-kube-api-access-2xjm4\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:36 crc kubenswrapper[4717]: I0218 12:06:36.922701 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bf6f923-4c7b-4ba8-8545-1fb97ad5551b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.140106 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.229111 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-ovsdbserver-sb\") pod \"ea526311-abc8-4443-8345-75f047f7909a\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.229203 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-config\") pod \"ea526311-abc8-4443-8345-75f047f7909a\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.229300 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-dns-svc\") pod \"ea526311-abc8-4443-8345-75f047f7909a\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.229361 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mddkx\" (UniqueName: \"kubernetes.io/projected/ea526311-abc8-4443-8345-75f047f7909a-kube-api-access-mddkx\") pod \"ea526311-abc8-4443-8345-75f047f7909a\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.229595 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-ovsdbserver-nb\") pod \"ea526311-abc8-4443-8345-75f047f7909a\" (UID: \"ea526311-abc8-4443-8345-75f047f7909a\") " Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.252785 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ea526311-abc8-4443-8345-75f047f7909a-kube-api-access-mddkx" (OuterVolumeSpecName: "kube-api-access-mddkx") pod "ea526311-abc8-4443-8345-75f047f7909a" (UID: "ea526311-abc8-4443-8345-75f047f7909a"). InnerVolumeSpecName "kube-api-access-mddkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.295182 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ea526311-abc8-4443-8345-75f047f7909a" (UID: "ea526311-abc8-4443-8345-75f047f7909a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.297589 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-config" (OuterVolumeSpecName: "config") pod "ea526311-abc8-4443-8345-75f047f7909a" (UID: "ea526311-abc8-4443-8345-75f047f7909a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.299227 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ea526311-abc8-4443-8345-75f047f7909a" (UID: "ea526311-abc8-4443-8345-75f047f7909a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.305104 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea526311-abc8-4443-8345-75f047f7909a" (UID: "ea526311-abc8-4443-8345-75f047f7909a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.331903 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mddkx\" (UniqueName: \"kubernetes.io/projected/ea526311-abc8-4443-8345-75f047f7909a-kube-api-access-mddkx\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.331955 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.331970 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.331983 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.331997 4717 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea526311-abc8-4443-8345-75f047f7909a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.379697 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-qb9hn" event={"ID":"ea526311-abc8-4443-8345-75f047f7909a","Type":"ContainerDied","Data":"46e04cf3c3596bcd1e15afbaa01575b3a261b2e0f90cb1e39119321a5306afb2"} Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.379846 4717 scope.go:117] "RemoveContainer" containerID="4b0ddd3de70c36b4da2c54c70b80014dc7288066de9d4a3f8d23f2603cacfc7c" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.379724 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-qb9hn" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.384471 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9f34-account-create-update-rlr2f" event={"ID":"dd279242-4b9b-4172-a147-f69d3d74117b","Type":"ContainerDied","Data":"feef60638e55e0cca3e7d091a2a76a5bb6d9c48481bc5247b84f68e5e613ec61"} Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.384510 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feef60638e55e0cca3e7d091a2a76a5bb6d9c48481bc5247b84f68e5e613ec61" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.384549 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9f34-account-create-update-rlr2f" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.395885 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-09da-account-create-update-wzt88" event={"ID":"1bf6f923-4c7b-4ba8-8545-1fb97ad5551b","Type":"ContainerDied","Data":"f9210945c43024989fa3ad18c927b806982e303a303457b0299ae6dd6e30815a"} Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.395937 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9210945c43024989fa3ad18c927b806982e303a303457b0299ae6dd6e30815a" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.396032 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-09da-account-create-update-wzt88" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.401204 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dz2k8" event={"ID":"546ddaab-7675-4c53-b86d-7f017d828784","Type":"ContainerStarted","Data":"00e95ce67a29c6b125fd07158f0744e63668e4aa36170ff6825e76022bf4d4ca"} Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.418106 4717 scope.go:117] "RemoveContainer" containerID="d09269bc43bd80baa3a585d8a05be340577cbbb999e61a37c77e3afd24f1a700" Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.426216 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-qb9hn"] Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.434390 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-qb9hn"] Feb 18 12:06:37 crc kubenswrapper[4717]: I0218 12:06:37.451176 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-dz2k8" podStartSLOduration=2.724585171 podStartE2EDuration="9.451148348s" podCreationTimestamp="2026-02-18 12:06:28 +0000 UTC" firstStartedPulling="2026-02-18 12:06:29.978674091 +0000 UTC m=+1024.380775407" lastFinishedPulling="2026-02-18 12:06:36.705237258 +0000 UTC m=+1031.107338584" observedRunningTime="2026-02-18 12:06:37.449160992 +0000 UTC m=+1031.851262308" watchObservedRunningTime="2026-02-18 12:06:37.451148348 +0000 UTC m=+1031.853249664" Feb 18 12:06:39 crc kubenswrapper[4717]: I0218 12:06:39.048324 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea526311-abc8-4443-8345-75f047f7909a" path="/var/lib/kubelet/pods/ea526311-abc8-4443-8345-75f047f7909a/volumes" Feb 18 12:06:41 crc kubenswrapper[4717]: I0218 12:06:41.439386 4717 generic.go:334] "Generic (PLEG): container finished" podID="546ddaab-7675-4c53-b86d-7f017d828784" 
containerID="00e95ce67a29c6b125fd07158f0744e63668e4aa36170ff6825e76022bf4d4ca" exitCode=0 Feb 18 12:06:41 crc kubenswrapper[4717]: I0218 12:06:41.439490 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dz2k8" event={"ID":"546ddaab-7675-4c53-b86d-7f017d828784","Type":"ContainerDied","Data":"00e95ce67a29c6b125fd07158f0744e63668e4aa36170ff6825e76022bf4d4ca"} Feb 18 12:06:42 crc kubenswrapper[4717]: I0218 12:06:42.773172 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:06:42 crc kubenswrapper[4717]: I0218 12:06:42.773726 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:06:42 crc kubenswrapper[4717]: I0218 12:06:42.799505 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-dz2k8" Feb 18 12:06:42 crc kubenswrapper[4717]: I0218 12:06:42.857146 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546ddaab-7675-4c53-b86d-7f017d828784-combined-ca-bundle\") pod \"546ddaab-7675-4c53-b86d-7f017d828784\" (UID: \"546ddaab-7675-4c53-b86d-7f017d828784\") " Feb 18 12:06:42 crc kubenswrapper[4717]: I0218 12:06:42.857216 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsj4k\" (UniqueName: \"kubernetes.io/projected/546ddaab-7675-4c53-b86d-7f017d828784-kube-api-access-tsj4k\") pod \"546ddaab-7675-4c53-b86d-7f017d828784\" (UID: \"546ddaab-7675-4c53-b86d-7f017d828784\") " Feb 18 12:06:42 crc kubenswrapper[4717]: I0218 12:06:42.857470 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/546ddaab-7675-4c53-b86d-7f017d828784-config-data\") pod \"546ddaab-7675-4c53-b86d-7f017d828784\" (UID: \"546ddaab-7675-4c53-b86d-7f017d828784\") " Feb 18 12:06:42 crc kubenswrapper[4717]: I0218 12:06:42.865703 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/546ddaab-7675-4c53-b86d-7f017d828784-kube-api-access-tsj4k" (OuterVolumeSpecName: "kube-api-access-tsj4k") pod "546ddaab-7675-4c53-b86d-7f017d828784" (UID: "546ddaab-7675-4c53-b86d-7f017d828784"). InnerVolumeSpecName "kube-api-access-tsj4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:06:42 crc kubenswrapper[4717]: I0218 12:06:42.886615 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/546ddaab-7675-4c53-b86d-7f017d828784-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "546ddaab-7675-4c53-b86d-7f017d828784" (UID: "546ddaab-7675-4c53-b86d-7f017d828784"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:06:42 crc kubenswrapper[4717]: I0218 12:06:42.908203 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/546ddaab-7675-4c53-b86d-7f017d828784-config-data" (OuterVolumeSpecName: "config-data") pod "546ddaab-7675-4c53-b86d-7f017d828784" (UID: "546ddaab-7675-4c53-b86d-7f017d828784"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:06:42 crc kubenswrapper[4717]: I0218 12:06:42.960409 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsj4k\" (UniqueName: \"kubernetes.io/projected/546ddaab-7675-4c53-b86d-7f017d828784-kube-api-access-tsj4k\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:42 crc kubenswrapper[4717]: I0218 12:06:42.960474 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/546ddaab-7675-4c53-b86d-7f017d828784-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:42 crc kubenswrapper[4717]: I0218 12:06:42.960488 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546ddaab-7675-4c53-b86d-7f017d828784-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.464717 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dz2k8" event={"ID":"546ddaab-7675-4c53-b86d-7f017d828784","Type":"ContainerDied","Data":"8871aeaf83e2fb14252fb25776f58613ef29b7ae7893edb94689ba78f6554aba"} Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.464784 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8871aeaf83e2fb14252fb25776f58613ef29b7ae7893edb94689ba78f6554aba" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.464865 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-dz2k8" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.800925 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-9xbgj"] Feb 18 12:06:43 crc kubenswrapper[4717]: E0218 12:06:43.801436 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b10c7d3-ae5b-4a8a-bba0-a696225d8879" containerName="mariadb-database-create" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801452 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b10c7d3-ae5b-4a8a-bba0-a696225d8879" containerName="mariadb-database-create" Feb 18 12:06:43 crc kubenswrapper[4717]: E0218 12:06:43.801464 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf6f923-4c7b-4ba8-8545-1fb97ad5551b" containerName="mariadb-account-create-update" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801469 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf6f923-4c7b-4ba8-8545-1fb97ad5551b" containerName="mariadb-account-create-update" Feb 18 12:06:43 crc kubenswrapper[4717]: E0218 12:06:43.801482 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea526311-abc8-4443-8345-75f047f7909a" containerName="init" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801489 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea526311-abc8-4443-8345-75f047f7909a" containerName="init" Feb 18 12:06:43 crc kubenswrapper[4717]: E0218 12:06:43.801502 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e432a478-a2c5-40bf-adc3-306647151c92" containerName="mariadb-database-create" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801508 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="e432a478-a2c5-40bf-adc3-306647151c92" containerName="mariadb-database-create" Feb 18 12:06:43 crc kubenswrapper[4717]: E0218 12:06:43.801527 4717 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ea526311-abc8-4443-8345-75f047f7909a" containerName="dnsmasq-dns" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801532 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea526311-abc8-4443-8345-75f047f7909a" containerName="dnsmasq-dns" Feb 18 12:06:43 crc kubenswrapper[4717]: E0218 12:06:43.801546 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f9b780-b9ee-4fa6-bf61-c2b5ec92a314" containerName="mariadb-database-create" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801551 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f9b780-b9ee-4fa6-bf61-c2b5ec92a314" containerName="mariadb-database-create" Feb 18 12:06:43 crc kubenswrapper[4717]: E0218 12:06:43.801565 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3dee21-58fd-4508-9549-30ef7d1e145b" containerName="mariadb-account-create-update" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801585 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3dee21-58fd-4508-9549-30ef7d1e145b" containerName="mariadb-account-create-update" Feb 18 12:06:43 crc kubenswrapper[4717]: E0218 12:06:43.801596 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd279242-4b9b-4172-a147-f69d3d74117b" containerName="mariadb-account-create-update" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801602 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd279242-4b9b-4172-a147-f69d3d74117b" containerName="mariadb-account-create-update" Feb 18 12:06:43 crc kubenswrapper[4717]: E0218 12:06:43.801610 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="546ddaab-7675-4c53-b86d-7f017d828784" containerName="keystone-db-sync" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801616 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="546ddaab-7675-4c53-b86d-7f017d828784" containerName="keystone-db-sync" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801812 4717 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e432a478-a2c5-40bf-adc3-306647151c92" containerName="mariadb-database-create" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801824 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f9b780-b9ee-4fa6-bf61-c2b5ec92a314" containerName="mariadb-database-create" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801835 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b10c7d3-ae5b-4a8a-bba0-a696225d8879" containerName="mariadb-database-create" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801843 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea526311-abc8-4443-8345-75f047f7909a" containerName="dnsmasq-dns" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801854 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="546ddaab-7675-4c53-b86d-7f017d828784" containerName="keystone-db-sync" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801865 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd279242-4b9b-4172-a147-f69d3d74117b" containerName="mariadb-account-create-update" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801877 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf6f923-4c7b-4ba8-8545-1fb97ad5551b" containerName="mariadb-account-create-update" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.801885 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3dee21-58fd-4508-9549-30ef7d1e145b" containerName="mariadb-account-create-update" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.802843 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.813762 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-s9f7t"] Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.815248 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.831608 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.831764 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-djr66" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.831870 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.831950 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.832097 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.833012 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-9xbgj"] Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.872708 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s9f7t"] Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.878579 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldmb7\" (UniqueName: \"kubernetes.io/projected/9df70013-9402-4d3f-b79f-9cdbe05a575a-kube-api-access-ldmb7\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc 
kubenswrapper[4717]: I0218 12:06:43.878648 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.878674 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.878739 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-combined-ca-bundle\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.878768 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.878790 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-scripts\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " 
pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.878816 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-credential-keys\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.878838 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqw44\" (UniqueName: \"kubernetes.io/projected/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-kube-api-access-kqw44\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.878859 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-dns-svc\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.878884 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-config-data\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.878923 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-fernet-keys\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " 
pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.878946 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-config\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.980430 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldmb7\" (UniqueName: \"kubernetes.io/projected/9df70013-9402-4d3f-b79f-9cdbe05a575a-kube-api-access-ldmb7\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.980524 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.980674 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.980950 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-combined-ca-bundle\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " 
pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.981019 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-scripts\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.981046 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.981107 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-credential-keys\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.981169 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqw44\" (UniqueName: \"kubernetes.io/projected/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-kube-api-access-kqw44\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.981201 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-dns-svc\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 
12:06:43.981250 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-config-data\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.981396 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-fernet-keys\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.981423 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-config\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.981559 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.982431 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-config\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.983092 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.983468 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-dns-svc\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.984000 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-pdj8h"] Feb 18 12:06:43 crc kubenswrapper[4717]: I0218 12:06:43.986307 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:43.987307 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.011051 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-combined-ca-bundle\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.011430 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-config-data\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " 
pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.011878 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-scripts\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.013867 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.014050 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kwqwh" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.014868 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqw44\" (UniqueName: \"kubernetes.io/projected/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-kube-api-access-kqw44\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.015150 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.015949 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldmb7\" (UniqueName: \"kubernetes.io/projected/9df70013-9402-4d3f-b79f-9cdbe05a575a-kube-api-access-ldmb7\") pod \"dnsmasq-dns-847c4cc679-9xbgj\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.017486 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-fernet-keys\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " 
pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.017871 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-credential-keys\") pod \"keystone-bootstrap-s9f7t\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.057206 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pdj8h"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.079238 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-698cfb4c87-mhbhr"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.081111 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.083671 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-scripts\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.083811 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njt9z\" (UniqueName: \"kubernetes.io/projected/70292f47-9494-42eb-a5ae-041c4bfc01ea-kube-api-access-njt9z\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.083846 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/70292f47-9494-42eb-a5ae-041c4bfc01ea-etc-machine-id\") pod \"cinder-db-sync-pdj8h\" (UID: 
\"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.083903 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-db-sync-config-data\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.083931 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-config-data\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.083997 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-combined-ca-bundle\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.086017 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.086066 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.089436 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.096409 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-lkrp4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.120351 4717 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-698cfb4c87-mhbhr"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.178022 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.185681 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-scripts\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.185776 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-horizon-secret-key\") pod \"horizon-698cfb4c87-mhbhr\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.185826 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-scripts\") pod \"horizon-698cfb4c87-mhbhr\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.185880 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njt9z\" (UniqueName: \"kubernetes.io/projected/70292f47-9494-42eb-a5ae-041c4bfc01ea-kube-api-access-njt9z\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.185914 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/70292f47-9494-42eb-a5ae-041c4bfc01ea-etc-machine-id\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.185942 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-db-sync-config-data\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.185976 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-config-data\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.186007 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-logs\") pod \"horizon-698cfb4c87-mhbhr\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.186053 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-combined-ca-bundle\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.186074 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpf6d\" (UniqueName: \"kubernetes.io/projected/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-kube-api-access-zpf6d\") pod 
\"horizon-698cfb4c87-mhbhr\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.186125 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-config-data\") pod \"horizon-698cfb4c87-mhbhr\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.197391 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/70292f47-9494-42eb-a5ae-041c4bfc01ea-etc-machine-id\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.197764 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.206145 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-scripts\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.216714 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-db-sync-config-data\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.221208 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-combined-ca-bundle\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.227578 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-config-data\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.234228 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njt9z\" (UniqueName: \"kubernetes.io/projected/70292f47-9494-42eb-a5ae-041c4bfc01ea-kube-api-access-njt9z\") pod \"cinder-db-sync-pdj8h\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.247192 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-wbtkn"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.248673 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wbtkn" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.251729 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wbtkn"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.262464 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-w4lpd" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.262823 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.299481 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-horizon-secret-key\") pod \"horizon-698cfb4c87-mhbhr\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.299571 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-scripts\") pod \"horizon-698cfb4c87-mhbhr\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.299648 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr6zv\" (UniqueName: \"kubernetes.io/projected/cc0bc5b3-673e-46dc-941a-151096e1831b-kube-api-access-hr6zv\") pod \"barbican-db-sync-wbtkn\" (UID: \"cc0bc5b3-673e-46dc-941a-151096e1831b\") " pod="openstack/barbican-db-sync-wbtkn" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.299707 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-logs\") pod \"horizon-698cfb4c87-mhbhr\" 
(UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.299751 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpf6d\" (UniqueName: \"kubernetes.io/projected/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-kube-api-access-zpf6d\") pod \"horizon-698cfb4c87-mhbhr\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.299778 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc0bc5b3-673e-46dc-941a-151096e1831b-db-sync-config-data\") pod \"barbican-db-sync-wbtkn\" (UID: \"cc0bc5b3-673e-46dc-941a-151096e1831b\") " pod="openstack/barbican-db-sync-wbtkn" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.299816 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-config-data\") pod \"horizon-698cfb4c87-mhbhr\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.299837 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0bc5b3-673e-46dc-941a-151096e1831b-combined-ca-bundle\") pod \"barbican-db-sync-wbtkn\" (UID: \"cc0bc5b3-673e-46dc-941a-151096e1831b\") " pod="openstack/barbican-db-sync-wbtkn" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.300998 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-logs\") pod \"horizon-698cfb4c87-mhbhr\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " 
pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.302070 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-scripts\") pod \"horizon-698cfb4c87-mhbhr\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.304722 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-config-data\") pod \"horizon-698cfb4c87-mhbhr\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.311145 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-horizon-secret-key\") pod \"horizon-698cfb4c87-mhbhr\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.399174 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpf6d\" (UniqueName: \"kubernetes.io/projected/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-kube-api-access-zpf6d\") pod \"horizon-698cfb4c87-mhbhr\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.410032 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr6zv\" (UniqueName: \"kubernetes.io/projected/cc0bc5b3-673e-46dc-941a-151096e1831b-kube-api-access-hr6zv\") pod \"barbican-db-sync-wbtkn\" (UID: \"cc0bc5b3-673e-46dc-941a-151096e1831b\") " pod="openstack/barbican-db-sync-wbtkn" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.410188 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc0bc5b3-673e-46dc-941a-151096e1831b-db-sync-config-data\") pod \"barbican-db-sync-wbtkn\" (UID: \"cc0bc5b3-673e-46dc-941a-151096e1831b\") " pod="openstack/barbican-db-sync-wbtkn" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.410247 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0bc5b3-673e-46dc-941a-151096e1831b-combined-ca-bundle\") pod \"barbican-db-sync-wbtkn\" (UID: \"cc0bc5b3-673e-46dc-941a-151096e1831b\") " pod="openstack/barbican-db-sync-wbtkn" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.427178 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0bc5b3-673e-46dc-941a-151096e1831b-combined-ca-bundle\") pod \"barbican-db-sync-wbtkn\" (UID: \"cc0bc5b3-673e-46dc-941a-151096e1831b\") " pod="openstack/barbican-db-sync-wbtkn" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.427386 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-77mb9"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.438807 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.442951 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-77mb9" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.452985 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.454094 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.479714 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc0bc5b3-673e-46dc-941a-151096e1831b-db-sync-config-data\") pod \"barbican-db-sync-wbtkn\" (UID: \"cc0bc5b3-673e-46dc-941a-151096e1831b\") " pod="openstack/barbican-db-sync-wbtkn" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.480348 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rp5hc" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.484499 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.491982 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-9xbgj"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.503952 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr6zv\" (UniqueName: \"kubernetes.io/projected/cc0bc5b3-673e-46dc-941a-151096e1831b-kube-api-access-hr6zv\") pod \"barbican-db-sync-wbtkn\" (UID: \"cc0bc5b3-673e-46dc-941a-151096e1831b\") " pod="openstack/barbican-db-sync-wbtkn" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.518014 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7893e53-3622-4455-8eb8-459235541b6a-combined-ca-bundle\") pod \"neutron-db-sync-77mb9\" (UID: \"c7893e53-3622-4455-8eb8-459235541b6a\") " pod="openstack/neutron-db-sync-77mb9" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.518135 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7893e53-3622-4455-8eb8-459235541b6a-config\") pod \"neutron-db-sync-77mb9\" (UID: \"c7893e53-3622-4455-8eb8-459235541b6a\") " pod="openstack/neutron-db-sync-77mb9" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.518204 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxgvq\" (UniqueName: \"kubernetes.io/projected/c7893e53-3622-4455-8eb8-459235541b6a-kube-api-access-wxgvq\") pod \"neutron-db-sync-77mb9\" (UID: \"c7893e53-3622-4455-8eb8-459235541b6a\") " pod="openstack/neutron-db-sync-77mb9" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.518297 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-mlr26"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.520091 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.525818 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vtln4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.526068 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.529410 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.553193 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-77mb9"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.566971 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.571931 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.592797 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.593086 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.619916 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.619970 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.620035 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-config-data\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.620065 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-scripts\") pod \"placement-db-sync-mlr26\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.620119 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7893e53-3622-4455-8eb8-459235541b6a-combined-ca-bundle\") pod \"neutron-db-sync-77mb9\" (UID: \"c7893e53-3622-4455-8eb8-459235541b6a\") " pod="openstack/neutron-db-sync-77mb9" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.620152 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz9d9\" (UniqueName: \"kubernetes.io/projected/608ad4dc-1e71-408a-9aea-015949cf9aff-kube-api-access-lz9d9\") pod \"placement-db-sync-mlr26\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.620182 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-config-data\") pod \"placement-db-sync-mlr26\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.620211 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-18d2-46ea-b1e1-17b26808f94d-log-httpd\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.620296 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/608ad4dc-1e71-408a-9aea-015949cf9aff-logs\") pod \"placement-db-sync-mlr26\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.620326 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/c7893e53-3622-4455-8eb8-459235541b6a-config\") pod \"neutron-db-sync-77mb9\" (UID: \"c7893e53-3622-4455-8eb8-459235541b6a\") " pod="openstack/neutron-db-sync-77mb9" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.622006 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-18d2-46ea-b1e1-17b26808f94d-run-httpd\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.622275 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxgvq\" (UniqueName: \"kubernetes.io/projected/c7893e53-3622-4455-8eb8-459235541b6a-kube-api-access-wxgvq\") pod \"neutron-db-sync-77mb9\" (UID: \"c7893e53-3622-4455-8eb8-459235541b6a\") " pod="openstack/neutron-db-sync-77mb9" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.622299 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-combined-ca-bundle\") pod \"placement-db-sync-mlr26\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.622334 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x7f2\" (UniqueName: \"kubernetes.io/projected/ce4f631a-18d2-46ea-b1e1-17b26808f94d-kube-api-access-8x7f2\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.622432 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-scripts\") pod 
\"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.625108 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mlr26"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.643849 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7893e53-3622-4455-8eb8-459235541b6a-combined-ca-bundle\") pod \"neutron-db-sync-77mb9\" (UID: \"c7893e53-3622-4455-8eb8-459235541b6a\") " pod="openstack/neutron-db-sync-77mb9" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.653848 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4sqcz"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.655521 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.657057 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7893e53-3622-4455-8eb8-459235541b6a-config\") pod \"neutron-db-sync-77mb9\" (UID: \"c7893e53-3622-4455-8eb8-459235541b6a\") " pod="openstack/neutron-db-sync-77mb9" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.674495 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxgvq\" (UniqueName: \"kubernetes.io/projected/c7893e53-3622-4455-8eb8-459235541b6a-kube-api-access-wxgvq\") pod \"neutron-db-sync-77mb9\" (UID: \"c7893e53-3622-4455-8eb8-459235541b6a\") " pod="openstack/neutron-db-sync-77mb9" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.696592 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wbtkn" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.712791 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b69c68f97-t72j4"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.714579 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.724194 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.729932 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.730045 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.730123 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-config-data\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.730173 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-scripts\") pod \"placement-db-sync-mlr26\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.730249 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q9gp\" (UniqueName: \"kubernetes.io/projected/846dd5ee-5395-4934-8d9f-c39bbc189e49-kube-api-access-7q9gp\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.730302 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.730380 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-config\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.730421 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9d9\" (UniqueName: \"kubernetes.io/projected/608ad4dc-1e71-408a-9aea-015949cf9aff-kube-api-access-lz9d9\") pod \"placement-db-sync-mlr26\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.730468 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-config-data\") pod \"placement-db-sync-mlr26\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.730517 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-18d2-46ea-b1e1-17b26808f94d-log-httpd\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.730544 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.730702 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/608ad4dc-1e71-408a-9aea-015949cf9aff-logs\") pod \"placement-db-sync-mlr26\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.730788 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-18d2-46ea-b1e1-17b26808f94d-run-httpd\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.730857 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-combined-ca-bundle\") 
pod \"placement-db-sync-mlr26\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.730882 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.730920 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x7f2\" (UniqueName: \"kubernetes.io/projected/ce4f631a-18d2-46ea-b1e1-17b26808f94d-kube-api-access-8x7f2\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.731000 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-scripts\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.734831 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/608ad4dc-1e71-408a-9aea-015949cf9aff-logs\") pod \"placement-db-sync-mlr26\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.736892 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-18d2-46ea-b1e1-17b26808f94d-run-httpd\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.747999 4717 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-18d2-46ea-b1e1-17b26808f94d-log-httpd\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.749379 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.770838 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.787961 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-combined-ca-bundle\") pod \"placement-db-sync-mlr26\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.788495 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz9d9\" (UniqueName: \"kubernetes.io/projected/608ad4dc-1e71-408a-9aea-015949cf9aff-kube-api-access-lz9d9\") pod \"placement-db-sync-mlr26\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.789962 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.789992 4717 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-config-data\") pod \"placement-db-sync-mlr26\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.792711 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-scripts\") pod \"placement-db-sync-mlr26\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.802938 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-scripts\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.804122 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x7f2\" (UniqueName: \"kubernetes.io/projected/ce4f631a-18d2-46ea-b1e1-17b26808f94d-kube-api-access-8x7f2\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.810233 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-config-data\") pod \"ceilometer-0\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.820304 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4sqcz"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.833387 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-w4dpb\" (UniqueName: \"kubernetes.io/projected/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-kube-api-access-w4dpb\") pod \"horizon-b69c68f97-t72j4\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.833469 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.833464 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.834198 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-config-data\") pod \"horizon-b69c68f97-t72j4\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.834385 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.834454 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-horizon-secret-key\") pod \"horizon-b69c68f97-t72j4\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:44 crc 
kubenswrapper[4717]: I0218 12:06:44.834589 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.834614 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q9gp\" (UniqueName: \"kubernetes.io/projected/846dd5ee-5395-4934-8d9f-c39bbc189e49-kube-api-access-7q9gp\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.834663 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.834733 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-config\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.834765 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-scripts\") pod \"horizon-b69c68f97-t72j4\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.834834 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.834881 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-logs\") pod \"horizon-b69c68f97-t72j4\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.835549 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.836412 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.838691 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.841464 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-c7wn5" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.841801 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.845284 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.847941 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-config\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.848073 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-77mb9" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.848404 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.852643 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.875175 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b69c68f97-t72j4"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.876313 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-mlr26" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.886024 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.888404 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q9gp\" (UniqueName: \"kubernetes.io/projected/846dd5ee-5395-4934-8d9f-c39bbc189e49-kube-api-access-7q9gp\") pod \"dnsmasq-dns-785d8bcb8c-4sqcz\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.919319 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.920963 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.921390 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.928230 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.928817 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.938126 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-logs\") pod \"horizon-b69c68f97-t72j4\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.938215 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e344f46-ef50-46f9-9e95-3955e29e7192-logs\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.938309 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4dpb\" (UniqueName: \"kubernetes.io/projected/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-kube-api-access-w4dpb\") pod \"horizon-b69c68f97-t72j4\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.938336 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:44 crc 
kubenswrapper[4717]: I0218 12:06:44.938397 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-config-data\") pod \"horizon-b69c68f97-t72j4\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.938419 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.938450 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.938490 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8e344f46-ef50-46f9-9e95-3955e29e7192-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.938552 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-horizon-secret-key\") pod \"horizon-b69c68f97-t72j4\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 
12:06:44.938645 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-scripts\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.938705 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwjhm\" (UniqueName: \"kubernetes.io/projected/8e344f46-ef50-46f9-9e95-3955e29e7192-kube-api-access-wwjhm\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.938730 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-scripts\") pod \"horizon-b69c68f97-t72j4\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.938759 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-config-data\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.939985 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-logs\") pod \"horizon-b69c68f97-t72j4\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.944621 4717 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-scripts\") pod \"horizon-b69c68f97-t72j4\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.954288 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-config-data\") pod \"horizon-b69c68f97-t72j4\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.969697 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4dpb\" (UniqueName: \"kubernetes.io/projected/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-kube-api-access-w4dpb\") pod \"horizon-b69c68f97-t72j4\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.971893 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-horizon-secret-key\") pod \"horizon-b69c68f97-t72j4\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:44 crc kubenswrapper[4717]: I0218 12:06:44.985574 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.044045 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.044099 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.044144 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8e344f46-ef50-46f9-9e95-3955e29e7192-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.044211 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-scripts\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.044239 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwjhm\" (UniqueName: \"kubernetes.io/projected/8e344f46-ef50-46f9-9e95-3955e29e7192-kube-api-access-wwjhm\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.044303 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-config-data\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.044360 4717 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e344f46-ef50-46f9-9e95-3955e29e7192-logs\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.044420 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.050788 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.063918 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-scripts\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.065661 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8e344f46-ef50-46f9-9e95-3955e29e7192-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.066585 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8e344f46-ef50-46f9-9e95-3955e29e7192-logs\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.103515 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.105615 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.128953 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-config-data\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.142726 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.159386 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.159724 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.159879 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngb8j\" (UniqueName: \"kubernetes.io/projected/70544872-46d3-4011-8571-db370250bafc-kube-api-access-ngb8j\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.160011 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.160152 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70544872-46d3-4011-8571-db370250bafc-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.160306 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70544872-46d3-4011-8571-db370250bafc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.160427 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.160560 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.168054 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwjhm\" (UniqueName: \"kubernetes.io/projected/8e344f46-ef50-46f9-9e95-3955e29e7192-kube-api-access-wwjhm\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.179959 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.205742 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.262388 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.262621 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70544872-46d3-4011-8571-db370250bafc-logs\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.262735 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70544872-46d3-4011-8571-db370250bafc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.262843 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc 
kubenswrapper[4717]: I0218 12:06:45.262934 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.263007 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.263077 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.263201 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngb8j\" (UniqueName: \"kubernetes.io/projected/70544872-46d3-4011-8571-db370250bafc-kube-api-access-ngb8j\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.271929 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70544872-46d3-4011-8571-db370250bafc-logs\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.272185 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70544872-46d3-4011-8571-db370250bafc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.282397 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.290743 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.291529 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.299820 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.301300 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.466185 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngb8j\" (UniqueName: \"kubernetes.io/projected/70544872-46d3-4011-8571-db370250bafc-kube-api-access-ngb8j\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.503253 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.504246 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.677244 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.791285 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-9xbgj"] Feb 18 12:06:45 crc kubenswrapper[4717]: W0218 12:06:45.805580 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9df70013_9402_4d3f_b79f_9cdbe05a575a.slice/crio-6ff8f062df8b8e069d5b7c1af07309fb54c3c3d19ce12b3baf1ff65f6039a1bc WatchSource:0}: Error finding container 6ff8f062df8b8e069d5b7c1af07309fb54c3c3d19ce12b3baf1ff65f6039a1bc: Status 404 returned error can't find the container with id 6ff8f062df8b8e069d5b7c1af07309fb54c3c3d19ce12b3baf1ff65f6039a1bc Feb 18 12:06:45 crc kubenswrapper[4717]: W0218 12:06:45.909128 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod135500b9_8e5d_4e1b_a64b_a8dd96e9f6df.slice/crio-ed94e52749659affbbd48896ab08a1c3851e8cbb225bfe1f61652bf823afd482 WatchSource:0}: Error finding container ed94e52749659affbbd48896ab08a1c3851e8cbb225bfe1f61652bf823afd482: Status 404 returned error can't find the container with id ed94e52749659affbbd48896ab08a1c3851e8cbb225bfe1f61652bf823afd482 Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.920020 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s9f7t"] Feb 18 12:06:45 crc kubenswrapper[4717]: I0218 12:06:45.977187 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-698cfb4c87-mhbhr"] Feb 18 12:06:45 crc kubenswrapper[4717]: W0218 12:06:45.991386 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ad902d2_747a_4907_9d90_fbbfbfb96ce5.slice/crio-86de35f1aed99351f53228a020416aeeb29a96533f369a60b47f03421b4b465e WatchSource:0}: Error finding container 
86de35f1aed99351f53228a020416aeeb29a96533f369a60b47f03421b4b465e: Status 404 returned error can't find the container with id 86de35f1aed99351f53228a020416aeeb29a96533f369a60b47f03421b4b465e Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.047966 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pdj8h"] Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.368245 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.475983 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wbtkn"] Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.510227 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b69c68f97-t72j4"] Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.544514 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-698cfb4c87-mhbhr"] Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.566346 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mlr26"] Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.601971 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4sqcz"] Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.617283 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.643394 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f85b87657-6chdr"] Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.645275 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.688532 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.702099 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" event={"ID":"846dd5ee-5395-4934-8d9f-c39bbc189e49","Type":"ContainerStarted","Data":"13d186c81685bbd04e1a40d4491068caf25e9211172e6cb64fe7ef5284398aec"} Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.707528 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mlr26" event={"ID":"608ad4dc-1e71-408a-9aea-015949cf9aff","Type":"ContainerStarted","Data":"de4deb8ef42d41fcd8c7f206a1654b85d21af286f8caddf9cd9c38a77e6308c9"} Feb 18 12:06:46 crc kubenswrapper[4717]: W0218 12:06:46.710082 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7893e53_3622_4455_8eb8_459235541b6a.slice/crio-334eaa8fa8b2ccdf115e5670afdeb886df16ddca163aef6ea4f2848e641182d1 WatchSource:0}: Error finding container 334eaa8fa8b2ccdf115e5670afdeb886df16ddca163aef6ea4f2848e641182d1: Status 404 returned error can't find the container with id 334eaa8fa8b2ccdf115e5670afdeb886df16ddca163aef6ea4f2848e641182d1 Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.714338 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f85b87657-6chdr"] Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.732566 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.733433 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pdj8h" 
event={"ID":"70292f47-9494-42eb-a5ae-041c4bfc01ea","Type":"ContainerStarted","Data":"a0fd5c79474fe4fcd19261b3e42f0e45130c663ecdc83932a08d03175a4946ec"} Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.749623 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-77mb9"] Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.769123 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcngd\" (UniqueName: \"kubernetes.io/projected/119c98e2-712a-4f01-b7e4-756b44e3968c-kube-api-access-lcngd\") pod \"horizon-f85b87657-6chdr\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.774548 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/119c98e2-712a-4f01-b7e4-756b44e3968c-logs\") pod \"horizon-f85b87657-6chdr\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.774892 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/119c98e2-712a-4f01-b7e4-756b44e3968c-horizon-secret-key\") pod \"horizon-f85b87657-6chdr\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.775163 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/119c98e2-712a-4f01-b7e4-756b44e3968c-config-data\") pod \"horizon-f85b87657-6chdr\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.775316 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/119c98e2-712a-4f01-b7e4-756b44e3968c-scripts\") pod \"horizon-f85b87657-6chdr\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.774913 4717 generic.go:334] "Generic (PLEG): container finished" podID="9df70013-9402-4d3f-b79f-9cdbe05a575a" containerID="1aa64774979934f7524d6a7618caecb116b5d3f4245a90cee5ad5fa07ba59bb9" exitCode=0 Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.774961 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" event={"ID":"9df70013-9402-4d3f-b79f-9cdbe05a575a","Type":"ContainerDied","Data":"1aa64774979934f7524d6a7618caecb116b5d3f4245a90cee5ad5fa07ba59bb9"} Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.775759 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" event={"ID":"9df70013-9402-4d3f-b79f-9cdbe05a575a","Type":"ContainerStarted","Data":"6ff8f062df8b8e069d5b7c1af07309fb54c3c3d19ce12b3baf1ff65f6039a1bc"} Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.787849 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s9f7t" event={"ID":"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df","Type":"ContainerStarted","Data":"bf9905cd8cf1d1b69a90269374cd6e6f0fa0a3fb556a86ffe9312a10ef3fcb65"} Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.787913 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s9f7t" event={"ID":"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df","Type":"ContainerStarted","Data":"ed94e52749659affbbd48896ab08a1c3851e8cbb225bfe1f61652bf823afd482"} Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.794315 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ce4f631a-18d2-46ea-b1e1-17b26808f94d","Type":"ContainerStarted","Data":"e04b7e9029bb59345f9cccdbe9516bd20aa0909d1aaa533f14d7fe3f5e899a4f"} Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.810379 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b69c68f97-t72j4" event={"ID":"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8","Type":"ContainerStarted","Data":"a6fd1cc7cb12b10fdcf76f6aa8930acd66dc564b2a82d13a9224b21d795aee7e"} Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.818152 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.831789 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-698cfb4c87-mhbhr" event={"ID":"7ad902d2-747a-4907-9d90-fbbfbfb96ce5","Type":"ContainerStarted","Data":"86de35f1aed99351f53228a020416aeeb29a96533f369a60b47f03421b4b465e"} Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.848062 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wbtkn" event={"ID":"cc0bc5b3-673e-46dc-941a-151096e1831b","Type":"ContainerStarted","Data":"4922a8b517bb91e7d293acff1916ff75cb5550200869cda2677810c35aa5a463"} Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.849217 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-s9f7t" podStartSLOduration=3.849192135 podStartE2EDuration="3.849192135s" podCreationTimestamp="2026-02-18 12:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:06:46.840852613 +0000 UTC m=+1041.242980259" watchObservedRunningTime="2026-02-18 12:06:46.849192135 +0000 UTC m=+1041.251293451" Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.877418 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcngd\" (UniqueName: 
\"kubernetes.io/projected/119c98e2-712a-4f01-b7e4-756b44e3968c-kube-api-access-lcngd\") pod \"horizon-f85b87657-6chdr\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.877612 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/119c98e2-712a-4f01-b7e4-756b44e3968c-logs\") pod \"horizon-f85b87657-6chdr\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.877694 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/119c98e2-712a-4f01-b7e4-756b44e3968c-horizon-secret-key\") pod \"horizon-f85b87657-6chdr\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.877794 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/119c98e2-712a-4f01-b7e4-756b44e3968c-config-data\") pod \"horizon-f85b87657-6chdr\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.877856 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/119c98e2-712a-4f01-b7e4-756b44e3968c-scripts\") pod \"horizon-f85b87657-6chdr\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.878925 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/119c98e2-712a-4f01-b7e4-756b44e3968c-scripts\") pod \"horizon-f85b87657-6chdr\" (UID: 
\"119c98e2-712a-4f01-b7e4-756b44e3968c\") " pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.879679 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/119c98e2-712a-4f01-b7e4-756b44e3968c-logs\") pod \"horizon-f85b87657-6chdr\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.880061 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/119c98e2-712a-4f01-b7e4-756b44e3968c-config-data\") pod \"horizon-f85b87657-6chdr\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.887952 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/119c98e2-712a-4f01-b7e4-756b44e3968c-horizon-secret-key\") pod \"horizon-f85b87657-6chdr\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:46 crc kubenswrapper[4717]: I0218 12:06:46.925219 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcngd\" (UniqueName: \"kubernetes.io/projected/119c98e2-712a-4f01-b7e4-756b44e3968c-kube-api-access-lcngd\") pod \"horizon-f85b87657-6chdr\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.002660 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.418844 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.524490 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-ovsdbserver-nb\") pod \"9df70013-9402-4d3f-b79f-9cdbe05a575a\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.525154 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-config\") pod \"9df70013-9402-4d3f-b79f-9cdbe05a575a\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.525316 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldmb7\" (UniqueName: \"kubernetes.io/projected/9df70013-9402-4d3f-b79f-9cdbe05a575a-kube-api-access-ldmb7\") pod \"9df70013-9402-4d3f-b79f-9cdbe05a575a\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.525412 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-dns-swift-storage-0\") pod \"9df70013-9402-4d3f-b79f-9cdbe05a575a\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.525489 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-dns-svc\") pod \"9df70013-9402-4d3f-b79f-9cdbe05a575a\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.525553 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-ovsdbserver-sb\") pod \"9df70013-9402-4d3f-b79f-9cdbe05a575a\" (UID: \"9df70013-9402-4d3f-b79f-9cdbe05a575a\") " Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.535600 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df70013-9402-4d3f-b79f-9cdbe05a575a-kube-api-access-ldmb7" (OuterVolumeSpecName: "kube-api-access-ldmb7") pod "9df70013-9402-4d3f-b79f-9cdbe05a575a" (UID: "9df70013-9402-4d3f-b79f-9cdbe05a575a"). InnerVolumeSpecName "kube-api-access-ldmb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.589819 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9df70013-9402-4d3f-b79f-9cdbe05a575a" (UID: "9df70013-9402-4d3f-b79f-9cdbe05a575a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.603008 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9df70013-9402-4d3f-b79f-9cdbe05a575a" (UID: "9df70013-9402-4d3f-b79f-9cdbe05a575a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.617581 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-config" (OuterVolumeSpecName: "config") pod "9df70013-9402-4d3f-b79f-9cdbe05a575a" (UID: "9df70013-9402-4d3f-b79f-9cdbe05a575a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.623345 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9df70013-9402-4d3f-b79f-9cdbe05a575a" (UID: "9df70013-9402-4d3f-b79f-9cdbe05a575a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.645905 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9df70013-9402-4d3f-b79f-9cdbe05a575a" (UID: "9df70013-9402-4d3f-b79f-9cdbe05a575a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.647052 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.647108 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldmb7\" (UniqueName: \"kubernetes.io/projected/9df70013-9402-4d3f-b79f-9cdbe05a575a-kube-api-access-ldmb7\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.647123 4717 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.647132 4717 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-dns-svc\") on node 
\"crc\" DevicePath \"\"" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.647141 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.647151 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9df70013-9402-4d3f-b79f-9cdbe05a575a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.719959 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f85b87657-6chdr"] Feb 18 12:06:47 crc kubenswrapper[4717]: W0218 12:06:47.736018 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod119c98e2_712a_4f01_b7e4_756b44e3968c.slice/crio-eb11f8da36b38399ef3668ca0c3ea8b886216f33d12b410bd65def181f819f17 WatchSource:0}: Error finding container eb11f8da36b38399ef3668ca0c3ea8b886216f33d12b410bd65def181f819f17: Status 404 returned error can't find the container with id eb11f8da36b38399ef3668ca0c3ea8b886216f33d12b410bd65def181f819f17 Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.882750 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.886379 4717 generic.go:334] "Generic (PLEG): container finished" podID="846dd5ee-5395-4934-8d9f-c39bbc189e49" containerID="50f0439b488ea5973ca1ddc915bdf1647790ee6446f86acfd0062f62a15f4f27" exitCode=0 Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.886588 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" event={"ID":"846dd5ee-5395-4934-8d9f-c39bbc189e49","Type":"ContainerDied","Data":"50f0439b488ea5973ca1ddc915bdf1647790ee6446f86acfd0062f62a15f4f27"} Feb 
18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.892017 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" event={"ID":"9df70013-9402-4d3f-b79f-9cdbe05a575a","Type":"ContainerDied","Data":"6ff8f062df8b8e069d5b7c1af07309fb54c3c3d19ce12b3baf1ff65f6039a1bc"} Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.892062 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-9xbgj" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.892111 4717 scope.go:117] "RemoveContainer" containerID="1aa64774979934f7524d6a7618caecb116b5d3f4245a90cee5ad5fa07ba59bb9" Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.893585 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"70544872-46d3-4011-8571-db370250bafc","Type":"ContainerStarted","Data":"19b96b60792791b77ae1b5d369ab4d2b4dddd1ea9b3c4ae4d3d8772796f6f8d6"} Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.900195 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-77mb9" event={"ID":"c7893e53-3622-4455-8eb8-459235541b6a","Type":"ContainerStarted","Data":"07d30e4c9eded45add94f59d8c81ef3846f9a2962ab1a8a899fd7e811abb205d"} Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.900278 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-77mb9" event={"ID":"c7893e53-3622-4455-8eb8-459235541b6a","Type":"ContainerStarted","Data":"334eaa8fa8b2ccdf115e5670afdeb886df16ddca163aef6ea4f2848e641182d1"} Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.906426 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f85b87657-6chdr" event={"ID":"119c98e2-712a-4f01-b7e4-756b44e3968c","Type":"ContainerStarted","Data":"eb11f8da36b38399ef3668ca0c3ea8b886216f33d12b410bd65def181f819f17"} Feb 18 12:06:47 crc kubenswrapper[4717]: I0218 12:06:47.960079 4717 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-77mb9" podStartSLOduration=3.960029517 podStartE2EDuration="3.960029517s" podCreationTimestamp="2026-02-18 12:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:06:47.932563469 +0000 UTC m=+1042.334664795" watchObservedRunningTime="2026-02-18 12:06:47.960029517 +0000 UTC m=+1042.362130833" Feb 18 12:06:48 crc kubenswrapper[4717]: I0218 12:06:48.055369 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-9xbgj"] Feb 18 12:06:48 crc kubenswrapper[4717]: I0218 12:06:48.073141 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-9xbgj"] Feb 18 12:06:48 crc kubenswrapper[4717]: I0218 12:06:48.936178 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8e344f46-ef50-46f9-9e95-3955e29e7192","Type":"ContainerStarted","Data":"bbf2d1479b6a1d8e66c090235bed09fc01655567c7ee3569961286b3faf6a69e"} Feb 18 12:06:48 crc kubenswrapper[4717]: I0218 12:06:48.974905 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"70544872-46d3-4011-8571-db370250bafc","Type":"ContainerStarted","Data":"991a3e0f969d8d6707d10970b0073ee882215603a867b81a78e2b3d328cad571"} Feb 18 12:06:49 crc kubenswrapper[4717]: I0218 12:06:49.068854 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9df70013-9402-4d3f-b79f-9cdbe05a575a" path="/var/lib/kubelet/pods/9df70013-9402-4d3f-b79f-9cdbe05a575a/volumes" Feb 18 12:06:50 crc kubenswrapper[4717]: I0218 12:06:50.003277 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" 
event={"ID":"846dd5ee-5395-4934-8d9f-c39bbc189e49","Type":"ContainerStarted","Data":"5a4f631ba1a0dd452140138004513cb5ff0aece5933b785161a45bce7c744729"} Feb 18 12:06:50 crc kubenswrapper[4717]: I0218 12:06:50.003722 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:50 crc kubenswrapper[4717]: I0218 12:06:50.029287 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" podStartSLOduration=6.029267594 podStartE2EDuration="6.029267594s" podCreationTimestamp="2026-02-18 12:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:06:50.025144994 +0000 UTC m=+1044.427246320" watchObservedRunningTime="2026-02-18 12:06:50.029267594 +0000 UTC m=+1044.431368910" Feb 18 12:06:51 crc kubenswrapper[4717]: I0218 12:06:51.045042 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="70544872-46d3-4011-8571-db370250bafc" containerName="glance-log" containerID="cri-o://991a3e0f969d8d6707d10970b0073ee882215603a867b81a78e2b3d328cad571" gracePeriod=30 Feb 18 12:06:51 crc kubenswrapper[4717]: I0218 12:06:51.045708 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="70544872-46d3-4011-8571-db370250bafc" containerName="glance-httpd" containerID="cri-o://0afddaa6738a2a5618a9ce80aeb2080d726d8d9e75c0915716a564024b54a168" gracePeriod=30 Feb 18 12:06:51 crc kubenswrapper[4717]: I0218 12:06:51.051527 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8e344f46-ef50-46f9-9e95-3955e29e7192","Type":"ContainerStarted","Data":"61635cb54347df9d5a3919e2cf17133dd0b963bb25ddab5a1e37eed371194dde"} Feb 18 12:06:51 crc kubenswrapper[4717]: I0218 12:06:51.051584 
4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"70544872-46d3-4011-8571-db370250bafc","Type":"ContainerStarted","Data":"0afddaa6738a2a5618a9ce80aeb2080d726d8d9e75c0915716a564024b54a168"} Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.060380 4717 generic.go:334] "Generic (PLEG): container finished" podID="70544872-46d3-4011-8571-db370250bafc" containerID="0afddaa6738a2a5618a9ce80aeb2080d726d8d9e75c0915716a564024b54a168" exitCode=0 Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.060804 4717 generic.go:334] "Generic (PLEG): container finished" podID="70544872-46d3-4011-8571-db370250bafc" containerID="991a3e0f969d8d6707d10970b0073ee882215603a867b81a78e2b3d328cad571" exitCode=143 Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.060832 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"70544872-46d3-4011-8571-db370250bafc","Type":"ContainerDied","Data":"0afddaa6738a2a5618a9ce80aeb2080d726d8d9e75c0915716a564024b54a168"} Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.060866 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"70544872-46d3-4011-8571-db370250bafc","Type":"ContainerDied","Data":"991a3e0f969d8d6707d10970b0073ee882215603a867b81a78e2b3d328cad571"} Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.671369 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.671331962 podStartE2EDuration="8.671331962s" podCreationTimestamp="2026-02-18 12:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:06:51.078606339 +0000 UTC m=+1045.480707655" watchObservedRunningTime="2026-02-18 12:06:52.671331962 +0000 UTC m=+1047.073433278" Feb 18 12:06:52 crc 
kubenswrapper[4717]: I0218 12:06:52.676792 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b69c68f97-t72j4"] Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.733972 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6b5f4c76fb-t68w8"] Feb 18 12:06:52 crc kubenswrapper[4717]: E0218 12:06:52.735399 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df70013-9402-4d3f-b79f-9cdbe05a575a" containerName="init" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.735429 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df70013-9402-4d3f-b79f-9cdbe05a575a" containerName="init" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.735955 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df70013-9402-4d3f-b79f-9cdbe05a575a" containerName="init" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.776930 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.798847 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.824961 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b5f4c76fb-t68w8"] Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.837850 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-horizon-tls-certs\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.837979 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-horizon-secret-key\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.838084 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txzxr\" (UniqueName: \"kubernetes.io/projected/619d0b9d-837a-4790-88cd-d2e11c6da6fc-kube-api-access-txzxr\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.838115 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-combined-ca-bundle\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.838274 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/619d0b9d-837a-4790-88cd-d2e11c6da6fc-scripts\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.838357 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/619d0b9d-837a-4790-88cd-d2e11c6da6fc-config-data\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.838418 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/619d0b9d-837a-4790-88cd-d2e11c6da6fc-logs\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.922672 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f85b87657-6chdr"] Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.926500 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-d949b4564-9ns6m"] Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.928681 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.935347 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d949b4564-9ns6m"] Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.943895 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-horizon-tls-certs\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.943959 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-horizon-secret-key\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.944006 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txzxr\" (UniqueName: \"kubernetes.io/projected/619d0b9d-837a-4790-88cd-d2e11c6da6fc-kube-api-access-txzxr\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " 
pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.944032 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-combined-ca-bundle\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.944095 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/619d0b9d-837a-4790-88cd-d2e11c6da6fc-scripts\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.944144 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/619d0b9d-837a-4790-88cd-d2e11c6da6fc-config-data\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.944173 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/619d0b9d-837a-4790-88cd-d2e11c6da6fc-logs\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.945172 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/619d0b9d-837a-4790-88cd-d2e11c6da6fc-logs\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.948117 4717 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/619d0b9d-837a-4790-88cd-d2e11c6da6fc-scripts\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.948623 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/619d0b9d-837a-4790-88cd-d2e11c6da6fc-config-data\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.973133 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-horizon-secret-key\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.973719 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-combined-ca-bundle\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.978822 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-horizon-tls-certs\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:52 crc kubenswrapper[4717]: I0218 12:06:52.984687 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txzxr\" (UniqueName: 
\"kubernetes.io/projected/619d0b9d-837a-4790-88cd-d2e11c6da6fc-kube-api-access-txzxr\") pod \"horizon-6b5f4c76fb-t68w8\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.046272 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/72d097e6-a40b-4e3e-8376-f3866f63e9d3-config-data\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.046343 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d097e6-a40b-4e3e-8376-f3866f63e9d3-horizon-tls-certs\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.046376 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/72d097e6-a40b-4e3e-8376-f3866f63e9d3-horizon-secret-key\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.046416 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72d097e6-a40b-4e3e-8376-f3866f63e9d3-logs\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.046477 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/72d097e6-a40b-4e3e-8376-f3866f63e9d3-combined-ca-bundle\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.046500 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7svlt\" (UniqueName: \"kubernetes.io/projected/72d097e6-a40b-4e3e-8376-f3866f63e9d3-kube-api-access-7svlt\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.046531 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72d097e6-a40b-4e3e-8376-f3866f63e9d3-scripts\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.098606 4717 generic.go:334] "Generic (PLEG): container finished" podID="135500b9-8e5d-4e1b-a64b-a8dd96e9f6df" containerID="bf9905cd8cf1d1b69a90269374cd6e6f0fa0a3fb556a86ffe9312a10ef3fcb65" exitCode=0 Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.098686 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s9f7t" event={"ID":"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df","Type":"ContainerDied","Data":"bf9905cd8cf1d1b69a90269374cd6e6f0fa0a3fb556a86ffe9312a10ef3fcb65"} Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.148298 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/72d097e6-a40b-4e3e-8376-f3866f63e9d3-config-data\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.148370 
4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d097e6-a40b-4e3e-8376-f3866f63e9d3-horizon-tls-certs\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.148416 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/72d097e6-a40b-4e3e-8376-f3866f63e9d3-horizon-secret-key\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.148471 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72d097e6-a40b-4e3e-8376-f3866f63e9d3-logs\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.148517 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d097e6-a40b-4e3e-8376-f3866f63e9d3-combined-ca-bundle\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.148559 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7svlt\" (UniqueName: \"kubernetes.io/projected/72d097e6-a40b-4e3e-8376-f3866f63e9d3-kube-api-access-7svlt\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.148604 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/72d097e6-a40b-4e3e-8376-f3866f63e9d3-scripts\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.150645 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/72d097e6-a40b-4e3e-8376-f3866f63e9d3-scripts\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.151052 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72d097e6-a40b-4e3e-8376-f3866f63e9d3-logs\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.151874 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/72d097e6-a40b-4e3e-8376-f3866f63e9d3-config-data\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.155787 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d097e6-a40b-4e3e-8376-f3866f63e9d3-horizon-tls-certs\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.156212 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/72d097e6-a40b-4e3e-8376-f3866f63e9d3-horizon-secret-key\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " 
pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.157543 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d097e6-a40b-4e3e-8376-f3866f63e9d3-combined-ca-bundle\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.175886 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7svlt\" (UniqueName: \"kubernetes.io/projected/72d097e6-a40b-4e3e-8376-f3866f63e9d3-kube-api-access-7svlt\") pod \"horizon-d949b4564-9ns6m\" (UID: \"72d097e6-a40b-4e3e-8376-f3866f63e9d3\") " pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.204937 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:06:53 crc kubenswrapper[4717]: I0218 12:06:53.399041 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:06:55 crc kubenswrapper[4717]: I0218 12:06:55.159790 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:06:55 crc kubenswrapper[4717]: I0218 12:06:55.229920 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rsnmh"] Feb 18 12:06:55 crc kubenswrapper[4717]: I0218 12:06:55.230324 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" podUID="7dd55c20-99bd-40db-add9-cea3d9b7221f" containerName="dnsmasq-dns" containerID="cri-o://dfb1552549d5f55e7322bd8da54d9317e03e970bcf48d97bc275e99b52fa2201" gracePeriod=10 Feb 18 12:06:55 crc kubenswrapper[4717]: I0218 12:06:55.880205 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" podUID="7dd55c20-99bd-40db-add9-cea3d9b7221f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Feb 18 12:06:56 crc kubenswrapper[4717]: I0218 12:06:56.146367 4717 generic.go:334] "Generic (PLEG): container finished" podID="7dd55c20-99bd-40db-add9-cea3d9b7221f" containerID="dfb1552549d5f55e7322bd8da54d9317e03e970bcf48d97bc275e99b52fa2201" exitCode=0 Feb 18 12:06:56 crc kubenswrapper[4717]: I0218 12:06:56.146440 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" event={"ID":"7dd55c20-99bd-40db-add9-cea3d9b7221f","Type":"ContainerDied","Data":"dfb1552549d5f55e7322bd8da54d9317e03e970bcf48d97bc275e99b52fa2201"} Feb 18 12:07:00 crc kubenswrapper[4717]: I0218 12:07:00.880387 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" podUID="7dd55c20-99bd-40db-add9-cea3d9b7221f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Feb 18 
12:07:03 crc kubenswrapper[4717]: I0218 12:07:03.249940 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8e344f46-ef50-46f9-9e95-3955e29e7192","Type":"ContainerStarted","Data":"81c34a3f666bd5d2bba18a8a5d3683786dc9fbffeeecfc56c7f4988b9395bb7e"} Feb 18 12:07:03 crc kubenswrapper[4717]: I0218 12:07:03.250132 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8e344f46-ef50-46f9-9e95-3955e29e7192" containerName="glance-log" containerID="cri-o://61635cb54347df9d5a3919e2cf17133dd0b963bb25ddab5a1e37eed371194dde" gracePeriod=30 Feb 18 12:07:03 crc kubenswrapper[4717]: I0218 12:07:03.250193 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8e344f46-ef50-46f9-9e95-3955e29e7192" containerName="glance-httpd" containerID="cri-o://81c34a3f666bd5d2bba18a8a5d3683786dc9fbffeeecfc56c7f4988b9395bb7e" gracePeriod=30 Feb 18 12:07:03 crc kubenswrapper[4717]: I0218 12:07:03.296644 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=19.296619641 podStartE2EDuration="19.296619641s" podCreationTimestamp="2026-02-18 12:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:07:03.287793865 +0000 UTC m=+1057.689895181" watchObservedRunningTime="2026-02-18 12:07:03.296619641 +0000 UTC m=+1057.698720957" Feb 18 12:07:04 crc kubenswrapper[4717]: I0218 12:07:04.263120 4717 generic.go:334] "Generic (PLEG): container finished" podID="8e344f46-ef50-46f9-9e95-3955e29e7192" containerID="81c34a3f666bd5d2bba18a8a5d3683786dc9fbffeeecfc56c7f4988b9395bb7e" exitCode=0 Feb 18 12:07:04 crc kubenswrapper[4717]: I0218 12:07:04.263611 4717 generic.go:334] "Generic (PLEG): container finished" 
podID="8e344f46-ef50-46f9-9e95-3955e29e7192" containerID="61635cb54347df9d5a3919e2cf17133dd0b963bb25ddab5a1e37eed371194dde" exitCode=143 Feb 18 12:07:04 crc kubenswrapper[4717]: I0218 12:07:04.263218 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8e344f46-ef50-46f9-9e95-3955e29e7192","Type":"ContainerDied","Data":"81c34a3f666bd5d2bba18a8a5d3683786dc9fbffeeecfc56c7f4988b9395bb7e"} Feb 18 12:07:04 crc kubenswrapper[4717]: I0218 12:07:04.263682 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8e344f46-ef50-46f9-9e95-3955e29e7192","Type":"ContainerDied","Data":"61635cb54347df9d5a3919e2cf17133dd0b963bb25ddab5a1e37eed371194dde"} Feb 18 12:07:05 crc kubenswrapper[4717]: E0218 12:07:05.752736 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 18 12:07:05 crc kubenswrapper[4717]: E0218 12:07:05.753928 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lz9d9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-mlr26_openstack(608ad4dc-1e71-408a-9aea-015949cf9aff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:07:05 crc kubenswrapper[4717]: E0218 12:07:05.755148 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-mlr26" podUID="608ad4dc-1e71-408a-9aea-015949cf9aff" Feb 18 12:07:05 crc kubenswrapper[4717]: I0218 12:07:05.880212 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" podUID="7dd55c20-99bd-40db-add9-cea3d9b7221f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Feb 18 12:07:05 crc kubenswrapper[4717]: I0218 12:07:05.880368 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:07:06 crc kubenswrapper[4717]: E0218 12:07:06.291198 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-mlr26" podUID="608ad4dc-1e71-408a-9aea-015949cf9aff" Feb 18 12:07:07 crc kubenswrapper[4717]: E0218 12:07:07.421306 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 18 12:07:07 crc kubenswrapper[4717]: E0218 12:07:07.421894 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67ch559h65h59fh6bh5f9h5d9h58fh95h64fh88h9dh686h5fbh5b9h667h55dh5fh688h65h5bh5c5hd8h6chbhb7h5cch596h546hc9hfbh56fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcngd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-f85b87657-6chdr_openstack(119c98e2-712a-4f01-b7e4-756b44e3968c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:07:07 crc kubenswrapper[4717]: E0218 
12:07:07.427817 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-f85b87657-6chdr" podUID="119c98e2-712a-4f01-b7e4-756b44e3968c" Feb 18 12:07:07 crc kubenswrapper[4717]: E0218 12:07:07.447942 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 18 12:07:07 crc kubenswrapper[4717]: E0218 12:07:07.448135 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6bh5fch679hd6hc7h655h5b4h676hf6hf5hb4h5dch569h665h9fh58bh5bbh5dfh6ch694h79hb5h5b9h655h65ch568h5ddh7bh6bh5b8hbfh674q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zpf6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-698cfb4c87-mhbhr_openstack(7ad902d2-747a-4907-9d90-fbbfbfb96ce5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:07:07 crc kubenswrapper[4717]: E0218 
12:07:07.460008 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-698cfb4c87-mhbhr" podUID="7ad902d2-747a-4907-9d90-fbbfbfb96ce5" Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.520603 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.597232 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-fernet-keys\") pod \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.597511 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqw44\" (UniqueName: \"kubernetes.io/projected/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-kube-api-access-kqw44\") pod \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.597595 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-scripts\") pod \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.597618 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-credential-keys\") pod 
\"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.597755 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-config-data\") pod \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.597875 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-combined-ca-bundle\") pod \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\" (UID: \"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df\") " Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.608001 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-kube-api-access-kqw44" (OuterVolumeSpecName: "kube-api-access-kqw44") pod "135500b9-8e5d-4e1b-a64b-a8dd96e9f6df" (UID: "135500b9-8e5d-4e1b-a64b-a8dd96e9f6df"). InnerVolumeSpecName "kube-api-access-kqw44". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.610784 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-scripts" (OuterVolumeSpecName: "scripts") pod "135500b9-8e5d-4e1b-a64b-a8dd96e9f6df" (UID: "135500b9-8e5d-4e1b-a64b-a8dd96e9f6df"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.613447 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "135500b9-8e5d-4e1b-a64b-a8dd96e9f6df" (UID: "135500b9-8e5d-4e1b-a64b-a8dd96e9f6df"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.619754 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "135500b9-8e5d-4e1b-a64b-a8dd96e9f6df" (UID: "135500b9-8e5d-4e1b-a64b-a8dd96e9f6df"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.643417 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-config-data" (OuterVolumeSpecName: "config-data") pod "135500b9-8e5d-4e1b-a64b-a8dd96e9f6df" (UID: "135500b9-8e5d-4e1b-a64b-a8dd96e9f6df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.643491 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "135500b9-8e5d-4e1b-a64b-a8dd96e9f6df" (UID: "135500b9-8e5d-4e1b-a64b-a8dd96e9f6df"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.712720 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.712770 4717 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.712786 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.712797 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.712809 4717 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:07 crc kubenswrapper[4717]: I0218 12:07:07.712822 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqw44\" (UniqueName: \"kubernetes.io/projected/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df-kube-api-access-kqw44\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.308583 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s9f7t" event={"ID":"135500b9-8e5d-4e1b-a64b-a8dd96e9f6df","Type":"ContainerDied","Data":"ed94e52749659affbbd48896ab08a1c3851e8cbb225bfe1f61652bf823afd482"} Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 
12:07:08.309095 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed94e52749659affbbd48896ab08a1c3851e8cbb225bfe1f61652bf823afd482" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.308662 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s9f7t" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.643672 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-s9f7t"] Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.650650 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-s9f7t"] Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.725750 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4q9xn"] Feb 18 12:07:08 crc kubenswrapper[4717]: E0218 12:07:08.726244 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135500b9-8e5d-4e1b-a64b-a8dd96e9f6df" containerName="keystone-bootstrap" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.726278 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="135500b9-8e5d-4e1b-a64b-a8dd96e9f6df" containerName="keystone-bootstrap" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.726489 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="135500b9-8e5d-4e1b-a64b-a8dd96e9f6df" containerName="keystone-bootstrap" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.727152 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.731628 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.731866 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.732150 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-djr66" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.732281 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.732409 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.747830 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4q9xn"] Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.855541 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-scripts\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.855705 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-config-data\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.855860 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-credential-keys\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.855891 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-fernet-keys\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.855960 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-combined-ca-bundle\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.856084 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzzm2\" (UniqueName: \"kubernetes.io/projected/e2729c19-de90-453d-b744-10b50c11a28b-kube-api-access-qzzm2\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.958288 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-scripts\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.958374 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-config-data\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.958478 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-credential-keys\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.958516 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-fernet-keys\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.958691 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-combined-ca-bundle\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.958773 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzzm2\" (UniqueName: \"kubernetes.io/projected/e2729c19-de90-453d-b744-10b50c11a28b-kube-api-access-qzzm2\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.967222 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-combined-ca-bundle\") pod 
\"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.967353 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-config-data\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.967604 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-scripts\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.967980 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-credential-keys\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.969796 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-fernet-keys\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:08 crc kubenswrapper[4717]: I0218 12:07:08.978044 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzzm2\" (UniqueName: \"kubernetes.io/projected/e2729c19-de90-453d-b744-10b50c11a28b-kube-api-access-qzzm2\") pod \"keystone-bootstrap-4q9xn\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:09 crc 
kubenswrapper[4717]: I0218 12:07:09.052389 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="135500b9-8e5d-4e1b-a64b-a8dd96e9f6df" path="/var/lib/kubelet/pods/135500b9-8e5d-4e1b-a64b-a8dd96e9f6df/volumes" Feb 18 12:07:09 crc kubenswrapper[4717]: I0218 12:07:09.056908 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:12 crc kubenswrapper[4717]: I0218 12:07:12.772812 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:07:12 crc kubenswrapper[4717]: I0218 12:07:12.773306 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:07:12 crc kubenswrapper[4717]: I0218 12:07:12.773371 4717 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 12:07:12 crc kubenswrapper[4717]: I0218 12:07:12.774274 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed34df06cd7a6fd0e8376737775379dc22fdfd46578a3bf716f5810ce3e16a5f"} pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:07:12 crc kubenswrapper[4717]: I0218 12:07:12.774341 4717 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" containerID="cri-o://ed34df06cd7a6fd0e8376737775379dc22fdfd46578a3bf716f5810ce3e16a5f" gracePeriod=600 Feb 18 12:07:12 crc kubenswrapper[4717]: E0218 12:07:12.963291 4717 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod823580ef_975b_4298_955b_fb3c0b5fefc3.slice/crio-ed34df06cd7a6fd0e8376737775379dc22fdfd46578a3bf716f5810ce3e16a5f.scope\": RecentStats: unable to find data in memory cache]" Feb 18 12:07:14 crc kubenswrapper[4717]: I0218 12:07:14.383659 4717 generic.go:334] "Generic (PLEG): container finished" podID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerID="ed34df06cd7a6fd0e8376737775379dc22fdfd46578a3bf716f5810ce3e16a5f" exitCode=0 Feb 18 12:07:14 crc kubenswrapper[4717]: I0218 12:07:14.383741 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerDied","Data":"ed34df06cd7a6fd0e8376737775379dc22fdfd46578a3bf716f5810ce3e16a5f"} Feb 18 12:07:14 crc kubenswrapper[4717]: I0218 12:07:14.383810 4717 scope.go:117] "RemoveContainer" containerID="11a1c75eda22e757819ca65e0602c28b288a19c1473f7585c0555728262bdcd4" Feb 18 12:07:15 crc kubenswrapper[4717]: I0218 12:07:15.503877 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 12:07:15 crc kubenswrapper[4717]: I0218 12:07:15.505451 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 12:07:15 crc kubenswrapper[4717]: I0218 12:07:15.677881 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 12:07:15 crc 
kubenswrapper[4717]: I0218 12:07:15.677947 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 12:07:15 crc kubenswrapper[4717]: I0218 12:07:15.888887 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" podUID="7dd55c20-99bd-40db-add9-cea3d9b7221f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.407889 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.452422 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"70544872-46d3-4011-8571-db370250bafc","Type":"ContainerDied","Data":"19b96b60792791b77ae1b5d369ab4d2b4dddd1ea9b3c4ae4d3d8772796f6f8d6"} Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.452896 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.511113 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-scripts\") pod \"70544872-46d3-4011-8571-db370250bafc\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.511282 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"70544872-46d3-4011-8571-db370250bafc\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.511353 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngb8j\" (UniqueName: \"kubernetes.io/projected/70544872-46d3-4011-8571-db370250bafc-kube-api-access-ngb8j\") pod \"70544872-46d3-4011-8571-db370250bafc\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.511440 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-combined-ca-bundle\") pod \"70544872-46d3-4011-8571-db370250bafc\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.511499 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-internal-tls-certs\") pod \"70544872-46d3-4011-8571-db370250bafc\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.511575 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/70544872-46d3-4011-8571-db370250bafc-httpd-run\") pod \"70544872-46d3-4011-8571-db370250bafc\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.511686 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70544872-46d3-4011-8571-db370250bafc-logs\") pod \"70544872-46d3-4011-8571-db370250bafc\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.511712 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-config-data\") pod \"70544872-46d3-4011-8571-db370250bafc\" (UID: \"70544872-46d3-4011-8571-db370250bafc\") " Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.512603 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70544872-46d3-4011-8571-db370250bafc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "70544872-46d3-4011-8571-db370250bafc" (UID: "70544872-46d3-4011-8571-db370250bafc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.514938 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70544872-46d3-4011-8571-db370250bafc-logs" (OuterVolumeSpecName: "logs") pod "70544872-46d3-4011-8571-db370250bafc" (UID: "70544872-46d3-4011-8571-db370250bafc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.519689 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-scripts" (OuterVolumeSpecName: "scripts") pod "70544872-46d3-4011-8571-db370250bafc" (UID: "70544872-46d3-4011-8571-db370250bafc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.520949 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70544872-46d3-4011-8571-db370250bafc-kube-api-access-ngb8j" (OuterVolumeSpecName: "kube-api-access-ngb8j") pod "70544872-46d3-4011-8571-db370250bafc" (UID: "70544872-46d3-4011-8571-db370250bafc"). InnerVolumeSpecName "kube-api-access-ngb8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.520960 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "70544872-46d3-4011-8571-db370250bafc" (UID: "70544872-46d3-4011-8571-db370250bafc"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.546890 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70544872-46d3-4011-8571-db370250bafc" (UID: "70544872-46d3-4011-8571-db370250bafc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.570803 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "70544872-46d3-4011-8571-db370250bafc" (UID: "70544872-46d3-4011-8571-db370250bafc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.574412 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-config-data" (OuterVolumeSpecName: "config-data") pod "70544872-46d3-4011-8571-db370250bafc" (UID: "70544872-46d3-4011-8571-db370250bafc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.613791 4717 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70544872-46d3-4011-8571-db370250bafc-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.613828 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70544872-46d3-4011-8571-db370250bafc-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.613838 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.613845 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 
12:07:20.613884 4717 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.613894 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngb8j\" (UniqueName: \"kubernetes.io/projected/70544872-46d3-4011-8571-db370250bafc-kube-api-access-ngb8j\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.613905 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.613913 4717 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70544872-46d3-4011-8571-db370250bafc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.632400 4717 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.716216 4717 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.790710 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.799475 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.828177 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] 
Feb 18 12:07:20 crc kubenswrapper[4717]: E0218 12:07:20.829316 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70544872-46d3-4011-8571-db370250bafc" containerName="glance-httpd" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.829427 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="70544872-46d3-4011-8571-db370250bafc" containerName="glance-httpd" Feb 18 12:07:20 crc kubenswrapper[4717]: E0218 12:07:20.829611 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70544872-46d3-4011-8571-db370250bafc" containerName="glance-log" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.829691 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="70544872-46d3-4011-8571-db370250bafc" containerName="glance-log" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.829979 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="70544872-46d3-4011-8571-db370250bafc" containerName="glance-log" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.830103 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="70544872-46d3-4011-8571-db370250bafc" containerName="glance-httpd" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.831564 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.834129 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.834564 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.853640 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.889381 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" podUID="7dd55c20-99bd-40db-add9-cea3d9b7221f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.922968 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e76e98e5-47e8-4c4c-ab97-f37cad99c313-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.923517 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkjwt\" (UniqueName: \"kubernetes.io/projected/e76e98e5-47e8-4c4c-ab97-f37cad99c313-kube-api-access-tkjwt\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.923577 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.923610 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e76e98e5-47e8-4c4c-ab97-f37cad99c313-logs\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.923867 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.923942 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.924020 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:20 crc kubenswrapper[4717]: I0218 12:07:20.924074 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.026724 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e76e98e5-47e8-4c4c-ab97-f37cad99c313-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.026846 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkjwt\" (UniqueName: \"kubernetes.io/projected/e76e98e5-47e8-4c4c-ab97-f37cad99c313-kube-api-access-tkjwt\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.026911 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.026954 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e76e98e5-47e8-4c4c-ab97-f37cad99c313-logs\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.027030 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.027060 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.027093 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.027120 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.028526 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e76e98e5-47e8-4c4c-ab97-f37cad99c313-logs\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.029037 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e76e98e5-47e8-4c4c-ab97-f37cad99c313-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.029566 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.034531 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.034591 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.034707 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.035137 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " 
pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.053527 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkjwt\" (UniqueName: \"kubernetes.io/projected/e76e98e5-47e8-4c4c-ab97-f37cad99c313-kube-api-access-tkjwt\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.056502 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70544872-46d3-4011-8571-db370250bafc" path="/var/lib/kubelet/pods/70544872-46d3-4011-8571-db370250bafc/volumes" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.056715 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:07:21 crc kubenswrapper[4717]: I0218 12:07:21.155164 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 12:07:25 crc kubenswrapper[4717]: I0218 12:07:25.890602 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" podUID="7dd55c20-99bd-40db-add9-cea3d9b7221f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Feb 18 12:07:28 crc kubenswrapper[4717]: E0218 12:07:28.244668 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 18 12:07:28 crc kubenswrapper[4717]: E0218 12:07:28.246772 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n656h547h667h85h548hfch66dh65bh665h698h564h5cdhb9h575h55bh5cch5d4hcbh659h56ch559h5c5h5fbhbh59ch97h697h5c9h64bh567hfch558q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pe
m,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8x7f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce4f631a-18d2-46ea-b1e1-17b26808f94d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.368711 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.373381 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.385311 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.419680 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/119c98e2-712a-4f01-b7e4-756b44e3968c-scripts\") pod \"119c98e2-712a-4f01-b7e4-756b44e3968c\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.419743 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/119c98e2-712a-4f01-b7e4-756b44e3968c-horizon-secret-key\") pod \"119c98e2-712a-4f01-b7e4-756b44e3968c\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.419775 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-ovsdbserver-nb\") pod \"7dd55c20-99bd-40db-add9-cea3d9b7221f\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.419868 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-dns-svc\") pod \"7dd55c20-99bd-40db-add9-cea3d9b7221f\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.419906 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/119c98e2-712a-4f01-b7e4-756b44e3968c-logs\") pod \"119c98e2-712a-4f01-b7e4-756b44e3968c\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.419932 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-scripts\") pod \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.419977 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcngd\" (UniqueName: \"kubernetes.io/projected/119c98e2-712a-4f01-b7e4-756b44e3968c-kube-api-access-lcngd\") pod \"119c98e2-712a-4f01-b7e4-756b44e3968c\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.420023 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-dns-swift-storage-0\") pod \"7dd55c20-99bd-40db-add9-cea3d9b7221f\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.420052 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-config\") pod \"7dd55c20-99bd-40db-add9-cea3d9b7221f\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.420078 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpf6d\" (UniqueName: \"kubernetes.io/projected/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-kube-api-access-zpf6d\") pod \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.420101 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/119c98e2-712a-4f01-b7e4-756b44e3968c-config-data\") pod \"119c98e2-712a-4f01-b7e4-756b44e3968c\" (UID: \"119c98e2-712a-4f01-b7e4-756b44e3968c\") " Feb 18 12:07:28 crc kubenswrapper[4717]: 
I0218 12:07:28.420121 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-logs\") pod \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.420158 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-horizon-secret-key\") pod \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.420196 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vdxh\" (UniqueName: \"kubernetes.io/projected/7dd55c20-99bd-40db-add9-cea3d9b7221f-kube-api-access-4vdxh\") pod \"7dd55c20-99bd-40db-add9-cea3d9b7221f\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.420215 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-config-data\") pod \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\" (UID: \"7ad902d2-747a-4907-9d90-fbbfbfb96ce5\") " Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.420240 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-ovsdbserver-sb\") pod \"7dd55c20-99bd-40db-add9-cea3d9b7221f\" (UID: \"7dd55c20-99bd-40db-add9-cea3d9b7221f\") " Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.424753 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-logs" (OuterVolumeSpecName: "logs") pod 
"7ad902d2-747a-4907-9d90-fbbfbfb96ce5" (UID: "7ad902d2-747a-4907-9d90-fbbfbfb96ce5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.425078 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-scripts" (OuterVolumeSpecName: "scripts") pod "7ad902d2-747a-4907-9d90-fbbfbfb96ce5" (UID: "7ad902d2-747a-4907-9d90-fbbfbfb96ce5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.425491 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/119c98e2-712a-4f01-b7e4-756b44e3968c-logs" (OuterVolumeSpecName: "logs") pod "119c98e2-712a-4f01-b7e4-756b44e3968c" (UID: "119c98e2-712a-4f01-b7e4-756b44e3968c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.432478 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7ad902d2-747a-4907-9d90-fbbfbfb96ce5" (UID: "7ad902d2-747a-4907-9d90-fbbfbfb96ce5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.433238 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-config-data" (OuterVolumeSpecName: "config-data") pod "7ad902d2-747a-4907-9d90-fbbfbfb96ce5" (UID: "7ad902d2-747a-4907-9d90-fbbfbfb96ce5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.433535 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/119c98e2-712a-4f01-b7e4-756b44e3968c-kube-api-access-lcngd" (OuterVolumeSpecName: "kube-api-access-lcngd") pod "119c98e2-712a-4f01-b7e4-756b44e3968c" (UID: "119c98e2-712a-4f01-b7e4-756b44e3968c"). InnerVolumeSpecName "kube-api-access-lcngd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.436624 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd55c20-99bd-40db-add9-cea3d9b7221f-kube-api-access-4vdxh" (OuterVolumeSpecName: "kube-api-access-4vdxh") pod "7dd55c20-99bd-40db-add9-cea3d9b7221f" (UID: "7dd55c20-99bd-40db-add9-cea3d9b7221f"). InnerVolumeSpecName "kube-api-access-4vdxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.441581 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/119c98e2-712a-4f01-b7e4-756b44e3968c-config-data" (OuterVolumeSpecName: "config-data") pod "119c98e2-712a-4f01-b7e4-756b44e3968c" (UID: "119c98e2-712a-4f01-b7e4-756b44e3968c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.447308 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-kube-api-access-zpf6d" (OuterVolumeSpecName: "kube-api-access-zpf6d") pod "7ad902d2-747a-4907-9d90-fbbfbfb96ce5" (UID: "7ad902d2-747a-4907-9d90-fbbfbfb96ce5"). InnerVolumeSpecName "kube-api-access-zpf6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.447779 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/119c98e2-712a-4f01-b7e4-756b44e3968c-scripts" (OuterVolumeSpecName: "scripts") pod "119c98e2-712a-4f01-b7e4-756b44e3968c" (UID: "119c98e2-712a-4f01-b7e4-756b44e3968c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.488639 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/119c98e2-712a-4f01-b7e4-756b44e3968c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "119c98e2-712a-4f01-b7e4-756b44e3968c" (UID: "119c98e2-712a-4f01-b7e4-756b44e3968c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.523020 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpf6d\" (UniqueName: \"kubernetes.io/projected/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-kube-api-access-zpf6d\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.523058 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/119c98e2-712a-4f01-b7e4-756b44e3968c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.523073 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.523082 4717 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 
12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.523092 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vdxh\" (UniqueName: \"kubernetes.io/projected/7dd55c20-99bd-40db-add9-cea3d9b7221f-kube-api-access-4vdxh\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.523101 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.523111 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/119c98e2-712a-4f01-b7e4-756b44e3968c-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.523120 4717 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/119c98e2-712a-4f01-b7e4-756b44e3968c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.523131 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ad902d2-747a-4907-9d90-fbbfbfb96ce5-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.523139 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/119c98e2-712a-4f01-b7e4-756b44e3968c-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.523147 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcngd\" (UniqueName: \"kubernetes.io/projected/119c98e2-712a-4f01-b7e4-756b44e3968c-kube-api-access-lcngd\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.529409 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7dd55c20-99bd-40db-add9-cea3d9b7221f" (UID: "7dd55c20-99bd-40db-add9-cea3d9b7221f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.539912 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7dd55c20-99bd-40db-add9-cea3d9b7221f" (UID: "7dd55c20-99bd-40db-add9-cea3d9b7221f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.547478 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7dd55c20-99bd-40db-add9-cea3d9b7221f" (UID: "7dd55c20-99bd-40db-add9-cea3d9b7221f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.557240 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7dd55c20-99bd-40db-add9-cea3d9b7221f" (UID: "7dd55c20-99bd-40db-add9-cea3d9b7221f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.557691 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-698cfb4c87-mhbhr" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.557830 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-698cfb4c87-mhbhr" event={"ID":"7ad902d2-747a-4907-9d90-fbbfbfb96ce5","Type":"ContainerDied","Data":"86de35f1aed99351f53228a020416aeeb29a96533f369a60b47f03421b4b465e"} Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.557978 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-config" (OuterVolumeSpecName: "config") pod "7dd55c20-99bd-40db-add9-cea3d9b7221f" (UID: "7dd55c20-99bd-40db-add9-cea3d9b7221f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.562987 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f85b87657-6chdr" event={"ID":"119c98e2-712a-4f01-b7e4-756b44e3968c","Type":"ContainerDied","Data":"eb11f8da36b38399ef3668ca0c3ea8b886216f33d12b410bd65def181f819f17"} Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.563206 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f85b87657-6chdr" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.571519 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" event={"ID":"7dd55c20-99bd-40db-add9-cea3d9b7221f","Type":"ContainerDied","Data":"1867dbe90910292dda16eac2cf6b28b59e2dca3f457f55c0216d84dab8885d4f"} Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.571645 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.625165 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.625235 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.625250 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.625279 4717 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.625292 4717 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7dd55c20-99bd-40db-add9-cea3d9b7221f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.679433 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rsnmh"] Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.689402 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rsnmh"] Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.715789 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-698cfb4c87-mhbhr"] Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.727211 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/horizon-698cfb4c87-mhbhr"] Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.776085 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f85b87657-6chdr"] Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.792670 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f85b87657-6chdr"] Feb 18 12:07:28 crc kubenswrapper[4717]: I0218 12:07:28.801515 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b5f4c76fb-t68w8"] Feb 18 12:07:29 crc kubenswrapper[4717]: I0218 12:07:29.054894 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="119c98e2-712a-4f01-b7e4-756b44e3968c" path="/var/lib/kubelet/pods/119c98e2-712a-4f01-b7e4-756b44e3968c/volumes" Feb 18 12:07:29 crc kubenswrapper[4717]: I0218 12:07:29.055927 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ad902d2-747a-4907-9d90-fbbfbfb96ce5" path="/var/lib/kubelet/pods/7ad902d2-747a-4907-9d90-fbbfbfb96ce5/volumes" Feb 18 12:07:29 crc kubenswrapper[4717]: I0218 12:07:29.056589 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dd55c20-99bd-40db-add9-cea3d9b7221f" path="/var/lib/kubelet/pods/7dd55c20-99bd-40db-add9-cea3d9b7221f/volumes" Feb 18 12:07:29 crc kubenswrapper[4717]: E0218 12:07:29.780814 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 18 12:07:29 crc kubenswrapper[4717]: E0218 12:07:29.781478 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njt9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-pdj8h_openstack(70292f47-9494-42eb-a5ae-041c4bfc01ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:07:29 crc kubenswrapper[4717]: E0218 12:07:29.782745 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-pdj8h" podUID="70292f47-9494-42eb-a5ae-041c4bfc01ea" Feb 18 12:07:29 crc kubenswrapper[4717]: W0218 12:07:29.825980 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod619d0b9d_837a_4790_88cd_d2e11c6da6fc.slice/crio-6650069fbcd3232ee79dd5dea3be34df2b7da25e88c05b904fccdb394159f61f WatchSource:0}: Error finding container 6650069fbcd3232ee79dd5dea3be34df2b7da25e88c05b904fccdb394159f61f: Status 404 returned error can't find the container with id 6650069fbcd3232ee79dd5dea3be34df2b7da25e88c05b904fccdb394159f61f Feb 18 12:07:29 crc kubenswrapper[4717]: I0218 12:07:29.845912 4717 scope.go:117] "RemoveContainer" containerID="0afddaa6738a2a5618a9ce80aeb2080d726d8d9e75c0915716a564024b54a168" Feb 18 12:07:29 crc kubenswrapper[4717]: I0218 12:07:29.978685 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.045791 4717 scope.go:117] "RemoveContainer" containerID="991a3e0f969d8d6707d10970b0073ee882215603a867b81a78e2b3d328cad571" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.063146 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwjhm\" (UniqueName: \"kubernetes.io/projected/8e344f46-ef50-46f9-9e95-3955e29e7192-kube-api-access-wwjhm\") pod \"8e344f46-ef50-46f9-9e95-3955e29e7192\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.063212 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-config-data\") pod \"8e344f46-ef50-46f9-9e95-3955e29e7192\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.063232 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"8e344f46-ef50-46f9-9e95-3955e29e7192\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.063293 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-combined-ca-bundle\") pod \"8e344f46-ef50-46f9-9e95-3955e29e7192\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.063334 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-scripts\") pod \"8e344f46-ef50-46f9-9e95-3955e29e7192\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " Feb 18 12:07:30 crc 
kubenswrapper[4717]: I0218 12:07:30.063381 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8e344f46-ef50-46f9-9e95-3955e29e7192-httpd-run\") pod \"8e344f46-ef50-46f9-9e95-3955e29e7192\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.063531 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-public-tls-certs\") pod \"8e344f46-ef50-46f9-9e95-3955e29e7192\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.063597 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e344f46-ef50-46f9-9e95-3955e29e7192-logs\") pod \"8e344f46-ef50-46f9-9e95-3955e29e7192\" (UID: \"8e344f46-ef50-46f9-9e95-3955e29e7192\") " Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.064575 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e344f46-ef50-46f9-9e95-3955e29e7192-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8e344f46-ef50-46f9-9e95-3955e29e7192" (UID: "8e344f46-ef50-46f9-9e95-3955e29e7192"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.065660 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e344f46-ef50-46f9-9e95-3955e29e7192-logs" (OuterVolumeSpecName: "logs") pod "8e344f46-ef50-46f9-9e95-3955e29e7192" (UID: "8e344f46-ef50-46f9-9e95-3955e29e7192"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.074617 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e344f46-ef50-46f9-9e95-3955e29e7192-kube-api-access-wwjhm" (OuterVolumeSpecName: "kube-api-access-wwjhm") pod "8e344f46-ef50-46f9-9e95-3955e29e7192" (UID: "8e344f46-ef50-46f9-9e95-3955e29e7192"). InnerVolumeSpecName "kube-api-access-wwjhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.082480 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-scripts" (OuterVolumeSpecName: "scripts") pod "8e344f46-ef50-46f9-9e95-3955e29e7192" (UID: "8e344f46-ef50-46f9-9e95-3955e29e7192"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.083010 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "8e344f46-ef50-46f9-9e95-3955e29e7192" (UID: "8e344f46-ef50-46f9-9e95-3955e29e7192"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.101061 4717 scope.go:117] "RemoveContainer" containerID="dfb1552549d5f55e7322bd8da54d9317e03e970bcf48d97bc275e99b52fa2201" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.160117 4717 scope.go:117] "RemoveContainer" containerID="6c97051ccc0d4f0e1f79144d2256a6dd22c4eb62d92034487f439b13ea475708" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.166431 4717 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8e344f46-ef50-46f9-9e95-3955e29e7192-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.166461 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e344f46-ef50-46f9-9e95-3955e29e7192-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.166474 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwjhm\" (UniqueName: \"kubernetes.io/projected/8e344f46-ef50-46f9-9e95-3955e29e7192-kube-api-access-wwjhm\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.166510 4717 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.166520 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.206164 4717 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 
12:07:30.252468 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e344f46-ef50-46f9-9e95-3955e29e7192" (UID: "8e344f46-ef50-46f9-9e95-3955e29e7192"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.262221 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-config-data" (OuterVolumeSpecName: "config-data") pod "8e344f46-ef50-46f9-9e95-3955e29e7192" (UID: "8e344f46-ef50-46f9-9e95-3955e29e7192"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.270078 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.270146 4717 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.270157 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.289165 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8e344f46-ef50-46f9-9e95-3955e29e7192" (UID: "8e344f46-ef50-46f9-9e95-3955e29e7192"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.374756 4717 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e344f46-ef50-46f9-9e95-3955e29e7192-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.430046 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d949b4564-9ns6m"] Feb 18 12:07:30 crc kubenswrapper[4717]: W0218 12:07:30.442113 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72d097e6_a40b_4e3e_8376_f3866f63e9d3.slice/crio-215ed4845f63375bf95d70637580c05e86ab3fbdb452b020397bcc6a73e2a238 WatchSource:0}: Error finding container 215ed4845f63375bf95d70637580c05e86ab3fbdb452b020397bcc6a73e2a238: Status 404 returned error can't find the container with id 215ed4845f63375bf95d70637580c05e86ab3fbdb452b020397bcc6a73e2a238 Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.443000 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4q9xn"] Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.484140 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.510002 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.592210 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wbtkn" event={"ID":"cc0bc5b3-673e-46dc-941a-151096e1831b","Type":"ContainerStarted","Data":"b6d361baaeb1cb58caf866b6899049d95c3e7d9c5fdd95f27715ef2197ee66db"} Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.599310 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.601564 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8e344f46-ef50-46f9-9e95-3955e29e7192","Type":"ContainerDied","Data":"bbf2d1479b6a1d8e66c090235bed09fc01655567c7ee3569961286b3faf6a69e"} Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.601648 4717 scope.go:117] "RemoveContainer" containerID="81c34a3f666bd5d2bba18a8a5d3683786dc9fbffeeecfc56c7f4988b9395bb7e" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.605021 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4q9xn" event={"ID":"e2729c19-de90-453d-b744-10b50c11a28b","Type":"ContainerStarted","Data":"6bc03727e3836293bef9bff7b4d8535a79c3b328da58fe2f0a6b4074944d635a"} Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.607483 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e76e98e5-47e8-4c4c-ab97-f37cad99c313","Type":"ContainerStarted","Data":"d55fc7cd2ea015d827696d45b32d7d3307289d9171f7b1a71f521f894497ba10"} Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.615846 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-wbtkn" podStartSLOduration=4.859625306 podStartE2EDuration="46.615793982s" podCreationTimestamp="2026-02-18 12:06:44 +0000 UTC" firstStartedPulling="2026-02-18 12:06:46.509722947 +0000 UTC m=+1040.911824253" lastFinishedPulling="2026-02-18 12:07:28.265891613 +0000 UTC m=+1082.667992929" observedRunningTime="2026-02-18 12:07:30.610329443 +0000 UTC m=+1085.012430769" watchObservedRunningTime="2026-02-18 12:07:30.615793982 +0000 UTC m=+1085.017895298" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.618945 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mlr26" 
event={"ID":"608ad4dc-1e71-408a-9aea-015949cf9aff","Type":"ContainerStarted","Data":"e154dd36b950ad82ade8436531d32dafe482b05fb9aa61ed908a352e13eae2ac"} Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.625294 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d949b4564-9ns6m" event={"ID":"72d097e6-a40b-4e3e-8376-f3866f63e9d3","Type":"ContainerStarted","Data":"215ed4845f63375bf95d70637580c05e86ab3fbdb452b020397bcc6a73e2a238"} Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.627142 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b69c68f97-t72j4" event={"ID":"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8","Type":"ContainerStarted","Data":"901fc8a39c7615d205a0798f618d24c53a093eafe3f22eada226567a24679f85"} Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.627220 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b69c68f97-t72j4" event={"ID":"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8","Type":"ContainerStarted","Data":"abd1180d7054dc275a08de443fe1297b7d43f2d6a025de0140724414ed5ff445"} Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.627453 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-b69c68f97-t72j4" podUID="d139d1d1-94c1-492e-9fe6-7d2a6e6497f8" containerName="horizon-log" containerID="cri-o://abd1180d7054dc275a08de443fe1297b7d43f2d6a025de0140724414ed5ff445" gracePeriod=30 Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.627868 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-b69c68f97-t72j4" podUID="d139d1d1-94c1-492e-9fe6-7d2a6e6497f8" containerName="horizon" containerID="cri-o://901fc8a39c7615d205a0798f618d24c53a093eafe3f22eada226567a24679f85" gracePeriod=30 Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.635080 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" 
event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"8619a78ac97f793404bae81465977701fb9cf7482a56c2df47cd47e2df2d8754"} Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.644815 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b5f4c76fb-t68w8" event={"ID":"619d0b9d-837a-4790-88cd-d2e11c6da6fc","Type":"ContainerStarted","Data":"ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494"} Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.644888 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b5f4c76fb-t68w8" event={"ID":"619d0b9d-837a-4790-88cd-d2e11c6da6fc","Type":"ContainerStarted","Data":"6650069fbcd3232ee79dd5dea3be34df2b7da25e88c05b904fccdb394159f61f"} Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.646491 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-mlr26" podStartSLOduration=3.354989266 podStartE2EDuration="46.646467013s" podCreationTimestamp="2026-02-18 12:06:44 +0000 UTC" firstStartedPulling="2026-02-18 12:06:46.580870456 +0000 UTC m=+1040.982971772" lastFinishedPulling="2026-02-18 12:07:29.872348203 +0000 UTC m=+1084.274449519" observedRunningTime="2026-02-18 12:07:30.637189284 +0000 UTC m=+1085.039290600" watchObservedRunningTime="2026-02-18 12:07:30.646467013 +0000 UTC m=+1085.048568329" Feb 18 12:07:30 crc kubenswrapper[4717]: E0218 12:07:30.652592 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-pdj8h" podUID="70292f47-9494-42eb-a5ae-041c4bfc01ea" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.675214 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-b69c68f97-t72j4" podStartSLOduration=4.918355182 
podStartE2EDuration="46.675176287s" podCreationTimestamp="2026-02-18 12:06:44 +0000 UTC" firstStartedPulling="2026-02-18 12:06:46.509068398 +0000 UTC m=+1040.911169714" lastFinishedPulling="2026-02-18 12:07:28.265889503 +0000 UTC m=+1082.667990819" observedRunningTime="2026-02-18 12:07:30.658915695 +0000 UTC m=+1085.061017021" watchObservedRunningTime="2026-02-18 12:07:30.675176287 +0000 UTC m=+1085.077277603" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.720352 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.732989 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.749331 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 12:07:30 crc kubenswrapper[4717]: E0218 12:07:30.749854 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd55c20-99bd-40db-add9-cea3d9b7221f" containerName="dnsmasq-dns" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.749876 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd55c20-99bd-40db-add9-cea3d9b7221f" containerName="dnsmasq-dns" Feb 18 12:07:30 crc kubenswrapper[4717]: E0218 12:07:30.749902 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd55c20-99bd-40db-add9-cea3d9b7221f" containerName="init" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.749912 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd55c20-99bd-40db-add9-cea3d9b7221f" containerName="init" Feb 18 12:07:30 crc kubenswrapper[4717]: E0218 12:07:30.749932 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e344f46-ef50-46f9-9e95-3955e29e7192" containerName="glance-log" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.749942 4717 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8e344f46-ef50-46f9-9e95-3955e29e7192" containerName="glance-log" Feb 18 12:07:30 crc kubenswrapper[4717]: E0218 12:07:30.749958 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e344f46-ef50-46f9-9e95-3955e29e7192" containerName="glance-httpd" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.749967 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e344f46-ef50-46f9-9e95-3955e29e7192" containerName="glance-httpd" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.750161 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e344f46-ef50-46f9-9e95-3955e29e7192" containerName="glance-httpd" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.750194 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dd55c20-99bd-40db-add9-cea3d9b7221f" containerName="dnsmasq-dns" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.750205 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e344f46-ef50-46f9-9e95-3955e29e7192" containerName="glance-log" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.751340 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.755699 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.755699 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.772582 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.792738 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.792817 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.792889 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-scripts\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.792973 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.793002 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-config-data\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.793106 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcht8\" (UniqueName: \"kubernetes.io/projected/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-kube-api-access-vcht8\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.793136 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-logs\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.793187 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.891973 4717 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-74f6bcbc87-rsnmh" podUID="7dd55c20-99bd-40db-add9-cea3d9b7221f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.894736 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.894797 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.894849 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-scripts\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.894909 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.894935 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.894992 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcht8\" (UniqueName: \"kubernetes.io/projected/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-kube-api-access-vcht8\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.895024 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-logs\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.895067 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.895409 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.895451 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") device mount path \"/mnt/openstack/pv03\"" 
pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.895914 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-logs\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.902300 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.907046 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-scripts\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.907089 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-config-data\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.914059 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.916090 4717 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcht8\" (UniqueName: \"kubernetes.io/projected/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-kube-api-access-vcht8\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:30 crc kubenswrapper[4717]: I0218 12:07:30.924225 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " pod="openstack/glance-default-external-api-0" Feb 18 12:07:31 crc kubenswrapper[4717]: I0218 12:07:31.057866 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e344f46-ef50-46f9-9e95-3955e29e7192" path="/var/lib/kubelet/pods/8e344f46-ef50-46f9-9e95-3955e29e7192/volumes" Feb 18 12:07:31 crc kubenswrapper[4717]: I0218 12:07:31.072685 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 12:07:31 crc kubenswrapper[4717]: I0218 12:07:31.249873 4717 scope.go:117] "RemoveContainer" containerID="61635cb54347df9d5a3919e2cf17133dd0b963bb25ddab5a1e37eed371194dde" Feb 18 12:07:31 crc kubenswrapper[4717]: I0218 12:07:31.662127 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce4f631a-18d2-46ea-b1e1-17b26808f94d","Type":"ContainerStarted","Data":"ef6a127c77e4b240564a23fbe9df66381c84028e8bb763d20f8050cd35a5847a"} Feb 18 12:07:31 crc kubenswrapper[4717]: I0218 12:07:31.667120 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d949b4564-9ns6m" event={"ID":"72d097e6-a40b-4e3e-8376-f3866f63e9d3","Type":"ContainerStarted","Data":"29e2c708462b499c60a5457716142535e9f9b9a8e764510f6b8c946a6d73f60f"} Feb 18 12:07:31 crc kubenswrapper[4717]: I0218 12:07:31.667207 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d949b4564-9ns6m" event={"ID":"72d097e6-a40b-4e3e-8376-f3866f63e9d3","Type":"ContainerStarted","Data":"4f82586ee3c35d2e866c5ff001dc053b687004abf7bb86a1214e8b7ab93f9f9e"} Feb 18 12:07:31 crc kubenswrapper[4717]: I0218 12:07:31.673758 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4q9xn" event={"ID":"e2729c19-de90-453d-b744-10b50c11a28b","Type":"ContainerStarted","Data":"adbb7e4d109a273f3bcf5a09cb092a07f155eb6e064a9c10d3c2becb1ad5f41d"} Feb 18 12:07:31 crc kubenswrapper[4717]: I0218 12:07:31.677103 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b5f4c76fb-t68w8" event={"ID":"619d0b9d-837a-4790-88cd-d2e11c6da6fc","Type":"ContainerStarted","Data":"0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce"} Feb 18 12:07:31 crc kubenswrapper[4717]: I0218 12:07:31.698944 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-d949b4564-9ns6m" podStartSLOduration=39.69890949 
podStartE2EDuration="39.69890949s" podCreationTimestamp="2026-02-18 12:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:07:31.685739127 +0000 UTC m=+1086.087840443" watchObservedRunningTime="2026-02-18 12:07:31.69890949 +0000 UTC m=+1086.101010806" Feb 18 12:07:31 crc kubenswrapper[4717]: I0218 12:07:31.741599 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4q9xn" podStartSLOduration=23.741526508 podStartE2EDuration="23.741526508s" podCreationTimestamp="2026-02-18 12:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:07:31.739722975 +0000 UTC m=+1086.141824291" watchObservedRunningTime="2026-02-18 12:07:31.741526508 +0000 UTC m=+1086.143627824" Feb 18 12:07:31 crc kubenswrapper[4717]: I0218 12:07:31.747581 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6b5f4c76fb-t68w8" podStartSLOduration=39.747547823 podStartE2EDuration="39.747547823s" podCreationTimestamp="2026-02-18 12:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:07:31.713975057 +0000 UTC m=+1086.116076373" watchObservedRunningTime="2026-02-18 12:07:31.747547823 +0000 UTC m=+1086.149649139" Feb 18 12:07:31 crc kubenswrapper[4717]: I0218 12:07:31.988817 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 12:07:32 crc kubenswrapper[4717]: I0218 12:07:32.703605 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e76e98e5-47e8-4c4c-ab97-f37cad99c313","Type":"ContainerStarted","Data":"e1a733b2aeb09679d3f67848a41e32376fb45ad684f0da257b6745765f88b5e4"} Feb 18 12:07:32 crc 
kubenswrapper[4717]: I0218 12:07:32.707011 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b","Type":"ContainerStarted","Data":"85717296d76cd25f87031dff482ad72b0204ae13894f2c191708452800f0ca27"} Feb 18 12:07:33 crc kubenswrapper[4717]: I0218 12:07:33.205029 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:07:33 crc kubenswrapper[4717]: I0218 12:07:33.205530 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:07:33 crc kubenswrapper[4717]: I0218 12:07:33.399485 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:07:33 crc kubenswrapper[4717]: I0218 12:07:33.399592 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:07:33 crc kubenswrapper[4717]: I0218 12:07:33.725445 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e76e98e5-47e8-4c4c-ab97-f37cad99c313","Type":"ContainerStarted","Data":"c321c235cc5eab02f81494487ecb1900249c83c58dccbe824b356ba1a8b845f7"} Feb 18 12:07:33 crc kubenswrapper[4717]: I0218 12:07:33.753312 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b","Type":"ContainerStarted","Data":"f774765cb2ed66c85abf2ebba93415d2a5e3b43cc6860e0646890a4d35442b7d"} Feb 18 12:07:33 crc kubenswrapper[4717]: I0218 12:07:33.799098 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=13.799074724 podStartE2EDuration="13.799074724s" podCreationTimestamp="2026-02-18 12:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:07:33.77484941 +0000 UTC m=+1088.176950726" watchObservedRunningTime="2026-02-18 12:07:33.799074724 +0000 UTC m=+1088.201176040" Feb 18 12:07:34 crc kubenswrapper[4717]: I0218 12:07:34.777328 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b","Type":"ContainerStarted","Data":"27637b5563867a9208732eeb4ddb23a11c2ced41d1ca0b40403e2916f1d24c85"} Feb 18 12:07:34 crc kubenswrapper[4717]: I0218 12:07:34.808761 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.808734807 podStartE2EDuration="4.808734807s" podCreationTimestamp="2026-02-18 12:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:07:34.807581674 +0000 UTC m=+1089.209682990" watchObservedRunningTime="2026-02-18 12:07:34.808734807 +0000 UTC m=+1089.210836123" Feb 18 12:07:35 crc kubenswrapper[4717]: I0218 12:07:35.180342 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:07:35 crc kubenswrapper[4717]: I0218 12:07:35.796439 4717 generic.go:334] "Generic (PLEG): container finished" podID="608ad4dc-1e71-408a-9aea-015949cf9aff" containerID="e154dd36b950ad82ade8436531d32dafe482b05fb9aa61ed908a352e13eae2ac" exitCode=0 Feb 18 12:07:35 crc kubenswrapper[4717]: I0218 12:07:35.796520 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mlr26" event={"ID":"608ad4dc-1e71-408a-9aea-015949cf9aff","Type":"ContainerDied","Data":"e154dd36b950ad82ade8436531d32dafe482b05fb9aa61ed908a352e13eae2ac"} Feb 18 12:07:36 crc kubenswrapper[4717]: I0218 12:07:36.810581 4717 generic.go:334] "Generic (PLEG): container finished" 
podID="cc0bc5b3-673e-46dc-941a-151096e1831b" containerID="b6d361baaeb1cb58caf866b6899049d95c3e7d9c5fdd95f27715ef2197ee66db" exitCode=0 Feb 18 12:07:36 crc kubenswrapper[4717]: I0218 12:07:36.810735 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wbtkn" event={"ID":"cc0bc5b3-673e-46dc-941a-151096e1831b","Type":"ContainerDied","Data":"b6d361baaeb1cb58caf866b6899049d95c3e7d9c5fdd95f27715ef2197ee66db"} Feb 18 12:07:36 crc kubenswrapper[4717]: I0218 12:07:36.814307 4717 generic.go:334] "Generic (PLEG): container finished" podID="e2729c19-de90-453d-b744-10b50c11a28b" containerID="adbb7e4d109a273f3bcf5a09cb092a07f155eb6e064a9c10d3c2becb1ad5f41d" exitCode=0 Feb 18 12:07:36 crc kubenswrapper[4717]: I0218 12:07:36.814411 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4q9xn" event={"ID":"e2729c19-de90-453d-b744-10b50c11a28b","Type":"ContainerDied","Data":"adbb7e4d109a273f3bcf5a09cb092a07f155eb6e064a9c10d3c2becb1ad5f41d"} Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.313843 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mlr26" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.323483 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wbtkn" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.474403 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0bc5b3-673e-46dc-941a-151096e1831b-combined-ca-bundle\") pod \"cc0bc5b3-673e-46dc-941a-151096e1831b\" (UID: \"cc0bc5b3-673e-46dc-941a-151096e1831b\") " Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.474443 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-config-data\") pod \"608ad4dc-1e71-408a-9aea-015949cf9aff\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.474470 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-scripts\") pod \"608ad4dc-1e71-408a-9aea-015949cf9aff\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.474555 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/608ad4dc-1e71-408a-9aea-015949cf9aff-logs\") pod \"608ad4dc-1e71-408a-9aea-015949cf9aff\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.474611 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9d9\" (UniqueName: \"kubernetes.io/projected/608ad4dc-1e71-408a-9aea-015949cf9aff-kube-api-access-lz9d9\") pod \"608ad4dc-1e71-408a-9aea-015949cf9aff\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.474645 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-combined-ca-bundle\") pod \"608ad4dc-1e71-408a-9aea-015949cf9aff\" (UID: \"608ad4dc-1e71-408a-9aea-015949cf9aff\") " Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.474767 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc0bc5b3-673e-46dc-941a-151096e1831b-db-sync-config-data\") pod \"cc0bc5b3-673e-46dc-941a-151096e1831b\" (UID: \"cc0bc5b3-673e-46dc-941a-151096e1831b\") " Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.474799 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr6zv\" (UniqueName: \"kubernetes.io/projected/cc0bc5b3-673e-46dc-941a-151096e1831b-kube-api-access-hr6zv\") pod \"cc0bc5b3-673e-46dc-941a-151096e1831b\" (UID: \"cc0bc5b3-673e-46dc-941a-151096e1831b\") " Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.475715 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/608ad4dc-1e71-408a-9aea-015949cf9aff-logs" (OuterVolumeSpecName: "logs") pod "608ad4dc-1e71-408a-9aea-015949cf9aff" (UID: "608ad4dc-1e71-408a-9aea-015949cf9aff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.476300 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/608ad4dc-1e71-408a-9aea-015949cf9aff-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.481311 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-scripts" (OuterVolumeSpecName: "scripts") pod "608ad4dc-1e71-408a-9aea-015949cf9aff" (UID: "608ad4dc-1e71-408a-9aea-015949cf9aff"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.481511 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/608ad4dc-1e71-408a-9aea-015949cf9aff-kube-api-access-lz9d9" (OuterVolumeSpecName: "kube-api-access-lz9d9") pod "608ad4dc-1e71-408a-9aea-015949cf9aff" (UID: "608ad4dc-1e71-408a-9aea-015949cf9aff"). InnerVolumeSpecName "kube-api-access-lz9d9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.485918 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0bc5b3-673e-46dc-941a-151096e1831b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cc0bc5b3-673e-46dc-941a-151096e1831b" (UID: "cc0bc5b3-673e-46dc-941a-151096e1831b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.486014 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc0bc5b3-673e-46dc-941a-151096e1831b-kube-api-access-hr6zv" (OuterVolumeSpecName: "kube-api-access-hr6zv") pod "cc0bc5b3-673e-46dc-941a-151096e1831b" (UID: "cc0bc5b3-673e-46dc-941a-151096e1831b"). InnerVolumeSpecName "kube-api-access-hr6zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.501215 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.513303 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0bc5b3-673e-46dc-941a-151096e1831b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc0bc5b3-673e-46dc-941a-151096e1831b" (UID: "cc0bc5b3-673e-46dc-941a-151096e1831b"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.514689 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-config-data" (OuterVolumeSpecName: "config-data") pod "608ad4dc-1e71-408a-9aea-015949cf9aff" (UID: "608ad4dc-1e71-408a-9aea-015949cf9aff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.549586 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "608ad4dc-1e71-408a-9aea-015949cf9aff" (UID: "608ad4dc-1e71-408a-9aea-015949cf9aff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.579031 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr6zv\" (UniqueName: \"kubernetes.io/projected/cc0bc5b3-673e-46dc-941a-151096e1831b-kube-api-access-hr6zv\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.579090 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0bc5b3-673e-46dc-941a-151096e1831b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.579105 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.579120 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.579134 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9d9\" (UniqueName: \"kubernetes.io/projected/608ad4dc-1e71-408a-9aea-015949cf9aff-kube-api-access-lz9d9\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.579146 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608ad4dc-1e71-408a-9aea-015949cf9aff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.579158 4717 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc0bc5b3-673e-46dc-941a-151096e1831b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.680009 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzzm2\" (UniqueName: \"kubernetes.io/projected/e2729c19-de90-453d-b744-10b50c11a28b-kube-api-access-qzzm2\") pod \"e2729c19-de90-453d-b744-10b50c11a28b\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.680557 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-credential-keys\") pod \"e2729c19-de90-453d-b744-10b50c11a28b\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.680628 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-fernet-keys\") pod \"e2729c19-de90-453d-b744-10b50c11a28b\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " 
Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.680796 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-combined-ca-bundle\") pod \"e2729c19-de90-453d-b744-10b50c11a28b\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.680850 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-scripts\") pod \"e2729c19-de90-453d-b744-10b50c11a28b\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.681196 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-config-data\") pod \"e2729c19-de90-453d-b744-10b50c11a28b\" (UID: \"e2729c19-de90-453d-b744-10b50c11a28b\") " Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.688281 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2729c19-de90-453d-b744-10b50c11a28b-kube-api-access-qzzm2" (OuterVolumeSpecName: "kube-api-access-qzzm2") pod "e2729c19-de90-453d-b744-10b50c11a28b" (UID: "e2729c19-de90-453d-b744-10b50c11a28b"). InnerVolumeSpecName "kube-api-access-qzzm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.688412 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e2729c19-de90-453d-b744-10b50c11a28b" (UID: "e2729c19-de90-453d-b744-10b50c11a28b"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.688560 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-scripts" (OuterVolumeSpecName: "scripts") pod "e2729c19-de90-453d-b744-10b50c11a28b" (UID: "e2729c19-de90-453d-b744-10b50c11a28b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.693490 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e2729c19-de90-453d-b744-10b50c11a28b" (UID: "e2729c19-de90-453d-b744-10b50c11a28b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.727107 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-config-data" (OuterVolumeSpecName: "config-data") pod "e2729c19-de90-453d-b744-10b50c11a28b" (UID: "e2729c19-de90-453d-b744-10b50c11a28b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.731102 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2729c19-de90-453d-b744-10b50c11a28b" (UID: "e2729c19-de90-453d-b744-10b50c11a28b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.787350 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.787422 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.787460 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.787475 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzzm2\" (UniqueName: \"kubernetes.io/projected/e2729c19-de90-453d-b744-10b50c11a28b-kube-api-access-qzzm2\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.787491 4717 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.787504 4717 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2729c19-de90-453d-b744-10b50c11a28b-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.852332 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mlr26" event={"ID":"608ad4dc-1e71-408a-9aea-015949cf9aff","Type":"ContainerDied","Data":"de4deb8ef42d41fcd8c7f206a1654b85d21af286f8caddf9cd9c38a77e6308c9"} Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 
12:07:39.852394 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de4deb8ef42d41fcd8c7f206a1654b85d21af286f8caddf9cd9c38a77e6308c9" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.852362 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mlr26" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.855107 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce4f631a-18d2-46ea-b1e1-17b26808f94d","Type":"ContainerStarted","Data":"353e55e6efec9856f25670a92eeb6aa76869217f096491cc5be61ef688fbd937"} Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.856820 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wbtkn" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.856846 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wbtkn" event={"ID":"cc0bc5b3-673e-46dc-941a-151096e1831b","Type":"ContainerDied","Data":"4922a8b517bb91e7d293acff1916ff75cb5550200869cda2677810c35aa5a463"} Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.856895 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4922a8b517bb91e7d293acff1916ff75cb5550200869cda2677810c35aa5a463" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.859396 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4q9xn" event={"ID":"e2729c19-de90-453d-b744-10b50c11a28b","Type":"ContainerDied","Data":"6bc03727e3836293bef9bff7b4d8535a79c3b328da58fe2f0a6b4074944d635a"} Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.859446 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4q9xn" Feb 18 12:07:39 crc kubenswrapper[4717]: I0218 12:07:39.859459 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bc03727e3836293bef9bff7b4d8535a79c3b328da58fe2f0a6b4074944d635a" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.515187 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5f47d9d48-pwkwm"] Feb 18 12:07:40 crc kubenswrapper[4717]: E0218 12:07:40.516290 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2729c19-de90-453d-b744-10b50c11a28b" containerName="keystone-bootstrap" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.516310 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2729c19-de90-453d-b744-10b50c11a28b" containerName="keystone-bootstrap" Feb 18 12:07:40 crc kubenswrapper[4717]: E0218 12:07:40.516328 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608ad4dc-1e71-408a-9aea-015949cf9aff" containerName="placement-db-sync" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.516334 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="608ad4dc-1e71-408a-9aea-015949cf9aff" containerName="placement-db-sync" Feb 18 12:07:40 crc kubenswrapper[4717]: E0218 12:07:40.516360 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0bc5b3-673e-46dc-941a-151096e1831b" containerName="barbican-db-sync" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.516370 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0bc5b3-673e-46dc-941a-151096e1831b" containerName="barbican-db-sync" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.516588 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc0bc5b3-673e-46dc-941a-151096e1831b" containerName="barbican-db-sync" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.516614 4717 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="608ad4dc-1e71-408a-9aea-015949cf9aff" containerName="placement-db-sync" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.516656 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2729c19-de90-453d-b744-10b50c11a28b" containerName="keystone-bootstrap" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.517763 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.524344 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vtln4" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.524844 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.526661 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.528603 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.534480 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.574390 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f47d9d48-pwkwm"] Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.609053 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49858184-0ffa-49c8-8fd5-5e3935eb70f9-logs\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.609111 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jp99\" (UniqueName: \"kubernetes.io/projected/49858184-0ffa-49c8-8fd5-5e3935eb70f9-kube-api-access-4jp99\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.609142 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-config-data\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.609177 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-public-tls-certs\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.609315 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-combined-ca-bundle\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.609352 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-internal-tls-certs\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.609386 4717 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-scripts\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.699591 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-766b8ffffc-2xlnb"] Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.700773 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.705727 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.706218 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-djr66" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.709364 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.709818 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.715978 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-internal-tls-certs\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.716070 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-scripts\") pod \"placement-5f47d9d48-pwkwm\" (UID: 
\"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.716135 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49858184-0ffa-49c8-8fd5-5e3935eb70f9-logs\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.716170 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jp99\" (UniqueName: \"kubernetes.io/projected/49858184-0ffa-49c8-8fd5-5e3935eb70f9-kube-api-access-4jp99\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.716206 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-config-data\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.716294 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-public-tls-certs\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.716601 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-combined-ca-bundle\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 
12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.716793 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49858184-0ffa-49c8-8fd5-5e3935eb70f9-logs\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.724800 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.725142 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.727392 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-public-tls-certs\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.729014 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-combined-ca-bundle\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.730596 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-internal-tls-certs\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.731179 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-scripts\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.731819 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-config-data\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.736328 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-766b8ffffc-2xlnb"] Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.763815 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jp99\" (UniqueName: \"kubernetes.io/projected/49858184-0ffa-49c8-8fd5-5e3935eb70f9-kube-api-access-4jp99\") pod \"placement-5f47d9d48-pwkwm\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.818318 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-config-data\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.818659 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ff5z\" (UniqueName: \"kubernetes.io/projected/1554ac8b-466a-47d1-a768-b250ee1ca204-kube-api-access-5ff5z\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.818828 4717 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-credential-keys\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.818968 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-fernet-keys\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.819088 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-internal-tls-certs\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.819279 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-scripts\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.819399 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-public-tls-certs\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.819543 4717 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-combined-ca-bundle\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.849355 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.887276 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-z58jl"] Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.894286 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.924237 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-credential-keys\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.924377 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-fernet-keys\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.924453 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-internal-tls-certs\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " 
pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.924585 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-scripts\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.924637 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-public-tls-certs\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.924728 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-combined-ca-bundle\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.924783 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-config-data\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.924823 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ff5z\" (UniqueName: \"kubernetes.io/projected/1554ac8b-466a-47d1-a768-b250ee1ca204-kube-api-access-5ff5z\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 
12:07:40.931523 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-859c78c974-8tzts"] Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.933769 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.938976 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.939199 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.940080 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-w4lpd" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.940995 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-fernet-keys\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.944913 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-internal-tls-certs\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.951111 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-combined-ca-bundle\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc 
kubenswrapper[4717]: I0218 12:07:40.964848 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-credential-keys\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.970847 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-scripts\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.980842 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-public-tls-certs\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:40 crc kubenswrapper[4717]: I0218 12:07:40.983529 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1554ac8b-466a-47d1-a768-b250ee1ca204-config-data\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.000664 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-z58jl"] Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.023868 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ff5z\" (UniqueName: \"kubernetes.io/projected/1554ac8b-466a-47d1-a768-b250ee1ca204-kube-api-access-5ff5z\") pod \"keystone-766b8ffffc-2xlnb\" (UID: \"1554ac8b-466a-47d1-a768-b250ee1ca204\") " 
pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.026945 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.027075 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-config\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.027137 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-config-data\") pod \"barbican-keystone-listener-859c78c974-8tzts\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.027179 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.027229 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtwzr\" (UniqueName: \"kubernetes.io/projected/22aba54a-8df1-4b52-821f-c25b7ff37d18-kube-api-access-vtwzr\") pod 
\"barbican-keystone-listener-859c78c974-8tzts\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.027292 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-combined-ca-bundle\") pod \"barbican-keystone-listener-859c78c974-8tzts\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.027325 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22aba54a-8df1-4b52-821f-c25b7ff37d18-logs\") pod \"barbican-keystone-listener-859c78c974-8tzts\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.027379 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.027408 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-config-data-custom\") pod \"barbican-keystone-listener-859c78c974-8tzts\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.027447 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.027490 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4qrn\" (UniqueName: \"kubernetes.io/projected/0125b53d-0ef7-441e-99a0-118f348d0bb1-kube-api-access-q4qrn\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.039671 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-859c78c974-8tzts"] Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.099220 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.099504 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.099539 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-698f4fc767-8w7f5"] Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.102565 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.108068 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.125952 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-698f4fc767-8w7f5"] Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.129796 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.129890 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-config\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.129941 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-config-data\") pod \"barbican-keystone-listener-859c78c974-8tzts\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.129971 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: 
I0218 12:07:41.130003 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtwzr\" (UniqueName: \"kubernetes.io/projected/22aba54a-8df1-4b52-821f-c25b7ff37d18-kube-api-access-vtwzr\") pod \"barbican-keystone-listener-859c78c974-8tzts\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.130033 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-combined-ca-bundle\") pod \"barbican-keystone-listener-859c78c974-8tzts\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.130054 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22aba54a-8df1-4b52-821f-c25b7ff37d18-logs\") pod \"barbican-keystone-listener-859c78c974-8tzts\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.130085 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.130106 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-config-data-custom\") pod \"barbican-keystone-listener-859c78c974-8tzts\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " 
pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.130133 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.130161 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4qrn\" (UniqueName: \"kubernetes.io/projected/0125b53d-0ef7-441e-99a0-118f348d0bb1-kube-api-access-q4qrn\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.131884 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.132828 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-config\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.141394 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc 
kubenswrapper[4717]: I0218 12:07:41.142086 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.142193 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22aba54a-8df1-4b52-821f-c25b7ff37d18-logs\") pod \"barbican-keystone-listener-859c78c974-8tzts\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.142203 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.145007 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.150123 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-combined-ca-bundle\") pod \"barbican-keystone-listener-859c78c974-8tzts\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.155565 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.155633 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.170569 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtwzr\" (UniqueName: \"kubernetes.io/projected/22aba54a-8df1-4b52-821f-c25b7ff37d18-kube-api-access-vtwzr\") pod \"barbican-keystone-listener-859c78c974-8tzts\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.161811 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-config-data\") pod \"barbican-keystone-listener-859c78c974-8tzts\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.177933 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-config-data-custom\") pod \"barbican-keystone-listener-859c78c974-8tzts\" (UID: 
\"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.194068 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4qrn\" (UniqueName: \"kubernetes.io/projected/0125b53d-0ef7-441e-99a0-118f348d0bb1-kube-api-access-q4qrn\") pod \"dnsmasq-dns-586bdc5f9-z58jl\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.194580 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.212960 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6b857cc988-tklbq"] Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.216138 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.248205 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-combined-ca-bundle\") pod \"barbican-worker-698f4fc767-8w7f5\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.248716 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgl99\" (UniqueName: \"kubernetes.io/projected/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-kube-api-access-dgl99\") pod \"barbican-worker-698f4fc767-8w7f5\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.248866 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-config-data\") pod \"barbican-worker-698f4fc767-8w7f5\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.249083 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-logs\") pod \"barbican-worker-698f4fc767-8w7f5\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.249214 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-config-data-custom\") pod \"barbican-worker-698f4fc767-8w7f5\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.297993 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.318870 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b857cc988-tklbq"] Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.334740 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-8649d4b975-zblq7"] Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.341427 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.358298 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.363075 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-config-data-custom\") pod \"barbican-worker-698f4fc767-8w7f5\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.363293 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-combined-ca-bundle\") pod \"barbican-worker-698f4fc767-8w7f5\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.363443 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgl99\" (UniqueName: \"kubernetes.io/projected/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-kube-api-access-dgl99\") pod \"barbican-worker-698f4fc767-8w7f5\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.363494 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-config-data\") pod \"barbican-worker-698f4fc767-8w7f5\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.363608 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-logs\") pod \"barbican-worker-698f4fc767-8w7f5\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.365709 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8649d4b975-zblq7"] Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.370370 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.371828 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-logs\") pod \"barbican-worker-698f4fc767-8w7f5\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.377152 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-config-data\") pod \"barbican-worker-698f4fc767-8w7f5\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.379544 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.415617 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgl99\" (UniqueName: \"kubernetes.io/projected/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-kube-api-access-dgl99\") pod \"barbican-worker-698f4fc767-8w7f5\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.424553 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-combined-ca-bundle\") pod \"barbican-worker-698f4fc767-8w7f5\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.435817 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-config-data-custom\") pod \"barbican-worker-698f4fc767-8w7f5\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.472595 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.550937 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42c182ca-f340-4ddf-ac38-b5eba6d9dbe5-config-data-custom\") pod \"barbican-keystone-listener-6b857cc988-tklbq\" (UID: \"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5\") " pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.551044 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ssqh\" (UniqueName: \"kubernetes.io/projected/45e47daf-054d-4262-b76c-349fb97ec950-kube-api-access-4ssqh\") pod \"barbican-worker-8649d4b975-zblq7\" (UID: \"45e47daf-054d-4262-b76c-349fb97ec950\") " pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.551091 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e47daf-054d-4262-b76c-349fb97ec950-config-data\") pod \"barbican-worker-8649d4b975-zblq7\" (UID: \"45e47daf-054d-4262-b76c-349fb97ec950\") " pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.551127 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf8v4\" (UniqueName: \"kubernetes.io/projected/42c182ca-f340-4ddf-ac38-b5eba6d9dbe5-kube-api-access-sf8v4\") pod \"barbican-keystone-listener-6b857cc988-tklbq\" (UID: \"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5\") " pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.551213 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c182ca-f340-4ddf-ac38-b5eba6d9dbe5-combined-ca-bundle\") pod \"barbican-keystone-listener-6b857cc988-tklbq\" (UID: \"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5\") " pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.560604 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42c182ca-f340-4ddf-ac38-b5eba6d9dbe5-logs\") pod \"barbican-keystone-listener-6b857cc988-tklbq\" (UID: \"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5\") " pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.561234 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45e47daf-054d-4262-b76c-349fb97ec950-config-data-custom\") pod \"barbican-worker-8649d4b975-zblq7\" (UID: \"45e47daf-054d-4262-b76c-349fb97ec950\") " pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.561298 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e47daf-054d-4262-b76c-349fb97ec950-logs\") pod \"barbican-worker-8649d4b975-zblq7\" (UID: \"45e47daf-054d-4262-b76c-349fb97ec950\") " pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.561339 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e47daf-054d-4262-b76c-349fb97ec950-combined-ca-bundle\") pod \"barbican-worker-8649d4b975-zblq7\" (UID: \"45e47daf-054d-4262-b76c-349fb97ec950\") " pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.568048 
4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.574877 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c182ca-f340-4ddf-ac38-b5eba6d9dbe5-config-data\") pod \"barbican-keystone-listener-6b857cc988-tklbq\" (UID: \"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5\") " pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.643234 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7f66d9f78-rmqt2"] Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.679871 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.684352 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.707356 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f66d9f78-rmqt2"] Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.730365 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42c182ca-f340-4ddf-ac38-b5eba6d9dbe5-config-data-custom\") pod \"barbican-keystone-listener-6b857cc988-tklbq\" (UID: \"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5\") " pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.730428 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ssqh\" (UniqueName: \"kubernetes.io/projected/45e47daf-054d-4262-b76c-349fb97ec950-kube-api-access-4ssqh\") pod \"barbican-worker-8649d4b975-zblq7\" (UID: 
\"45e47daf-054d-4262-b76c-349fb97ec950\") " pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.730452 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e47daf-054d-4262-b76c-349fb97ec950-config-data\") pod \"barbican-worker-8649d4b975-zblq7\" (UID: \"45e47daf-054d-4262-b76c-349fb97ec950\") " pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.730474 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf8v4\" (UniqueName: \"kubernetes.io/projected/42c182ca-f340-4ddf-ac38-b5eba6d9dbe5-kube-api-access-sf8v4\") pod \"barbican-keystone-listener-6b857cc988-tklbq\" (UID: \"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5\") " pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.730512 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-combined-ca-bundle\") pod \"barbican-api-7f66d9f78-rmqt2\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.730538 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c182ca-f340-4ddf-ac38-b5eba6d9dbe5-combined-ca-bundle\") pod \"barbican-keystone-listener-6b857cc988-tklbq\" (UID: \"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5\") " pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.730559 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-logs\") pod \"barbican-api-7f66d9f78-rmqt2\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.730581 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42c182ca-f340-4ddf-ac38-b5eba6d9dbe5-logs\") pod \"barbican-keystone-listener-6b857cc988-tklbq\" (UID: \"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5\") " pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.730616 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-config-data-custom\") pod \"barbican-api-7f66d9f78-rmqt2\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.730640 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45e47daf-054d-4262-b76c-349fb97ec950-config-data-custom\") pod \"barbican-worker-8649d4b975-zblq7\" (UID: \"45e47daf-054d-4262-b76c-349fb97ec950\") " pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.730657 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e47daf-054d-4262-b76c-349fb97ec950-logs\") pod \"barbican-worker-8649d4b975-zblq7\" (UID: \"45e47daf-054d-4262-b76c-349fb97ec950\") " pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.730674 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/45e47daf-054d-4262-b76c-349fb97ec950-combined-ca-bundle\") pod \"barbican-worker-8649d4b975-zblq7\" (UID: \"45e47daf-054d-4262-b76c-349fb97ec950\") " pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.730711 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c182ca-f340-4ddf-ac38-b5eba6d9dbe5-config-data\") pod \"barbican-keystone-listener-6b857cc988-tklbq\" (UID: \"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5\") " pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.730801 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnqvl\" (UniqueName: \"kubernetes.io/projected/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-kube-api-access-mnqvl\") pod \"barbican-api-7f66d9f78-rmqt2\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.730851 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-config-data\") pod \"barbican-api-7f66d9f78-rmqt2\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.753839 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e47daf-054d-4262-b76c-349fb97ec950-logs\") pod \"barbican-worker-8649d4b975-zblq7\" (UID: \"45e47daf-054d-4262-b76c-349fb97ec950\") " pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.754692 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/42c182ca-f340-4ddf-ac38-b5eba6d9dbe5-logs\") pod \"barbican-keystone-listener-6b857cc988-tklbq\" (UID: \"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5\") " pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.794000 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ssqh\" (UniqueName: \"kubernetes.io/projected/45e47daf-054d-4262-b76c-349fb97ec950-kube-api-access-4ssqh\") pod \"barbican-worker-8649d4b975-zblq7\" (UID: \"45e47daf-054d-4262-b76c-349fb97ec950\") " pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.797127 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45e47daf-054d-4262-b76c-349fb97ec950-config-data-custom\") pod \"barbican-worker-8649d4b975-zblq7\" (UID: \"45e47daf-054d-4262-b76c-349fb97ec950\") " pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.797793 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e47daf-054d-4262-b76c-349fb97ec950-combined-ca-bundle\") pod \"barbican-worker-8649d4b975-zblq7\" (UID: \"45e47daf-054d-4262-b76c-349fb97ec950\") " pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.801073 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42c182ca-f340-4ddf-ac38-b5eba6d9dbe5-config-data-custom\") pod \"barbican-keystone-listener-6b857cc988-tklbq\" (UID: \"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5\") " pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.802100 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/45e47daf-054d-4262-b76c-349fb97ec950-config-data\") pod \"barbican-worker-8649d4b975-zblq7\" (UID: \"45e47daf-054d-4262-b76c-349fb97ec950\") " pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.813897 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c182ca-f340-4ddf-ac38-b5eba6d9dbe5-combined-ca-bundle\") pod \"barbican-keystone-listener-6b857cc988-tklbq\" (UID: \"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5\") " pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.826227 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf8v4\" (UniqueName: \"kubernetes.io/projected/42c182ca-f340-4ddf-ac38-b5eba6d9dbe5-kube-api-access-sf8v4\") pod \"barbican-keystone-listener-6b857cc988-tklbq\" (UID: \"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5\") " pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.830951 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c182ca-f340-4ddf-ac38-b5eba6d9dbe5-config-data\") pod \"barbican-keystone-listener-6b857cc988-tklbq\" (UID: \"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5\") " pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.832942 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnqvl\" (UniqueName: \"kubernetes.io/projected/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-kube-api-access-mnqvl\") pod \"barbican-api-7f66d9f78-rmqt2\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.833030 4717 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-config-data\") pod \"barbican-api-7f66d9f78-rmqt2\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.834423 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-combined-ca-bundle\") pod \"barbican-api-7f66d9f78-rmqt2\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.834462 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-logs\") pod \"barbican-api-7f66d9f78-rmqt2\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.834505 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-config-data-custom\") pod \"barbican-api-7f66d9f78-rmqt2\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.836179 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-logs\") pod \"barbican-api-7f66d9f78-rmqt2\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.843999 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-config-data\") pod \"barbican-api-7f66d9f78-rmqt2\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.861813 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-config-data-custom\") pod \"barbican-api-7f66d9f78-rmqt2\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.866074 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-combined-ca-bundle\") pod \"barbican-api-7f66d9f78-rmqt2\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.873307 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnqvl\" (UniqueName: \"kubernetes.io/projected/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-kube-api-access-mnqvl\") pod \"barbican-api-7f66d9f78-rmqt2\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.910808 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.929229 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.929291 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.929312 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 12:07:41 crc kubenswrapper[4717]: I0218 12:07:41.929418 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 12:07:42 crc kubenswrapper[4717]: I0218 12:07:42.028925 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f47d9d48-pwkwm"] Feb 18 12:07:42 crc kubenswrapper[4717]: I0218 12:07:42.058495 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8649d4b975-zblq7" Feb 18 12:07:42 crc kubenswrapper[4717]: I0218 12:07:42.116593 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:42 crc kubenswrapper[4717]: I0218 12:07:42.352134 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-766b8ffffc-2xlnb"] Feb 18 12:07:42 crc kubenswrapper[4717]: I0218 12:07:42.587221 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-698f4fc767-8w7f5"] Feb 18 12:07:42 crc kubenswrapper[4717]: I0218 12:07:42.600521 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-z58jl"] Feb 18 12:07:42 crc kubenswrapper[4717]: I0218 12:07:42.811485 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-859c78c974-8tzts"] Feb 18 12:07:42 crc kubenswrapper[4717]: I0218 12:07:42.837325 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b857cc988-tklbq"] Feb 18 12:07:42 crc kubenswrapper[4717]: I0218 12:07:42.962249 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8649d4b975-zblq7"] Feb 18 12:07:42 crc kubenswrapper[4717]: W0218 12:07:42.973936 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45e47daf_054d_4262_b76c_349fb97ec950.slice/crio-6c6bbfbd4adac7857080a6787d2033898c5498ee415fb651a4fa0d384f4ac5b5 WatchSource:0}: Error finding container 6c6bbfbd4adac7857080a6787d2033898c5498ee415fb651a4fa0d384f4ac5b5: Status 404 returned error can't find the container with id 6c6bbfbd4adac7857080a6787d2033898c5498ee415fb651a4fa0d384f4ac5b5 Feb 18 12:07:42 crc kubenswrapper[4717]: I0218 12:07:42.993961 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-766b8ffffc-2xlnb" event={"ID":"1554ac8b-466a-47d1-a768-b250ee1ca204","Type":"ContainerStarted","Data":"0f2ac9be813f7718ff4286676296b2926e7409f467ff28d868fdb37dbed1c4a9"} Feb 18 12:07:43 crc kubenswrapper[4717]: I0218 
12:07:43.021524 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" event={"ID":"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5","Type":"ContainerStarted","Data":"83326dcf681e0ea212051115b6b90f537961129e8d9ad85de74e05a7b370f672"} Feb 18 12:07:43 crc kubenswrapper[4717]: I0218 12:07:43.030291 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-698f4fc767-8w7f5" event={"ID":"9a262b0a-4d1c-46ba-b281-d95194a8bfa2","Type":"ContainerStarted","Data":"f705caa3048b337b6c4aa5775a308f5d6fe2f88422c95e9936590ce9020f8256"} Feb 18 12:07:43 crc kubenswrapper[4717]: I0218 12:07:43.050796 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f47d9d48-pwkwm" event={"ID":"49858184-0ffa-49c8-8fd5-5e3935eb70f9","Type":"ContainerStarted","Data":"c48ad9ae248465e8c685562c251aeb10dac9e20863c4e020b8deef8309feeccd"} Feb 18 12:07:43 crc kubenswrapper[4717]: I0218 12:07:43.050841 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f47d9d48-pwkwm" event={"ID":"49858184-0ffa-49c8-8fd5-5e3935eb70f9","Type":"ContainerStarted","Data":"97895495b89f509f05e8c150508eff87181e286f1bba433376d5943acd952f01"} Feb 18 12:07:43 crc kubenswrapper[4717]: I0218 12:07:43.055716 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859c78c974-8tzts" event={"ID":"22aba54a-8df1-4b52-821f-c25b7ff37d18","Type":"ContainerStarted","Data":"55bced52c29e1f4cf134a2b280794121b26e9d8e71f6d0182b6b6489567ca405"} Feb 18 12:07:43 crc kubenswrapper[4717]: I0218 12:07:43.072551 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" event={"ID":"0125b53d-0ef7-441e-99a0-118f348d0bb1","Type":"ContainerStarted","Data":"d133b1abcfb54139ff19bff1b9bd37688aa46fdfc9720d7d83c910856b8868f5"} Feb 18 12:07:43 crc kubenswrapper[4717]: I0218 12:07:43.100201 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-api-7f66d9f78-rmqt2"] Feb 18 12:07:43 crc kubenswrapper[4717]: W0218 12:07:43.197531 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08ddc7c0_43cb_4abd_bcab_b2998ffbae42.slice/crio-e168fda5423430464fe9e40d6d34021ab329759f5fe243c33abb30e91b07c1a8 WatchSource:0}: Error finding container e168fda5423430464fe9e40d6d34021ab329759f5fe243c33abb30e91b07c1a8: Status 404 returned error can't find the container with id e168fda5423430464fe9e40d6d34021ab329759f5fe243c33abb30e91b07c1a8 Feb 18 12:07:43 crc kubenswrapper[4717]: I0218 12:07:43.213334 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b5f4c76fb-t68w8" podUID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 18 12:07:43 crc kubenswrapper[4717]: I0218 12:07:43.416864 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-d949b4564-9ns6m" podUID="72d097e6-a40b-4e3e-8376-f3866f63e9d3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 18 12:07:44 crc kubenswrapper[4717]: I0218 12:07:44.157728 4717 generic.go:334] "Generic (PLEG): container finished" podID="0125b53d-0ef7-441e-99a0-118f348d0bb1" containerID="f93e0d7f681dc49e7e58ba27bdd06555fd2e9d6662c0ab0694aede69bdc694f6" exitCode=0 Feb 18 12:07:44 crc kubenswrapper[4717]: I0218 12:07:44.158469 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" event={"ID":"0125b53d-0ef7-441e-99a0-118f348d0bb1","Type":"ContainerDied","Data":"f93e0d7f681dc49e7e58ba27bdd06555fd2e9d6662c0ab0694aede69bdc694f6"} Feb 18 12:07:44 crc kubenswrapper[4717]: I0218 12:07:44.174143 
4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-766b8ffffc-2xlnb" event={"ID":"1554ac8b-466a-47d1-a768-b250ee1ca204","Type":"ContainerStarted","Data":"5eec4ee9c6196f1a1e0a21c900b15ded4ad09693b0b9a6dcb34a2a89cc3e41b6"} Feb 18 12:07:44 crc kubenswrapper[4717]: I0218 12:07:44.174461 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:07:44 crc kubenswrapper[4717]: I0218 12:07:44.190417 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8649d4b975-zblq7" event={"ID":"45e47daf-054d-4262-b76c-349fb97ec950","Type":"ContainerStarted","Data":"6c6bbfbd4adac7857080a6787d2033898c5498ee415fb651a4fa0d384f4ac5b5"} Feb 18 12:07:44 crc kubenswrapper[4717]: I0218 12:07:44.229349 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-766b8ffffc-2xlnb" podStartSLOduration=4.229322286 podStartE2EDuration="4.229322286s" podCreationTimestamp="2026-02-18 12:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:07:44.220997595 +0000 UTC m=+1098.623098911" watchObservedRunningTime="2026-02-18 12:07:44.229322286 +0000 UTC m=+1098.631423602" Feb 18 12:07:44 crc kubenswrapper[4717]: I0218 12:07:44.251902 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f66d9f78-rmqt2" event={"ID":"08ddc7c0-43cb-4abd-bcab-b2998ffbae42","Type":"ContainerStarted","Data":"054e0eb3af972319c506fbf2a6945db5622def0807a258a741b64e0bbb7f9520"} Feb 18 12:07:44 crc kubenswrapper[4717]: I0218 12:07:44.251978 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f66d9f78-rmqt2" event={"ID":"08ddc7c0-43cb-4abd-bcab-b2998ffbae42","Type":"ContainerStarted","Data":"e168fda5423430464fe9e40d6d34021ab329759f5fe243c33abb30e91b07c1a8"} Feb 18 12:07:44 crc kubenswrapper[4717]: I0218 
12:07:44.266246 4717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 12:07:44 crc kubenswrapper[4717]: I0218 12:07:44.266288 4717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 12:07:44 crc kubenswrapper[4717]: I0218 12:07:44.266452 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f47d9d48-pwkwm" event={"ID":"49858184-0ffa-49c8-8fd5-5e3935eb70f9","Type":"ContainerStarted","Data":"f558af2f95131207fa7bd2d2215c90799ff2dae2975235e102954c6f59fe4b36"} Feb 18 12:07:44 crc kubenswrapper[4717]: I0218 12:07:44.266619 4717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 12:07:44 crc kubenswrapper[4717]: I0218 12:07:44.266629 4717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 12:07:44 crc kubenswrapper[4717]: I0218 12:07:44.322575 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5f47d9d48-pwkwm" podStartSLOduration=4.322545985 podStartE2EDuration="4.322545985s" podCreationTimestamp="2026-02-18 12:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:07:44.315300864 +0000 UTC m=+1098.717402200" watchObservedRunningTime="2026-02-18 12:07:44.322545985 +0000 UTC m=+1098.724647301" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.224698 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-55d8f77d98-h2lh4"] Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.227287 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.233203 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.238144 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.262874 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55d8f77d98-h2lh4"] Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.302585 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-logs\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.302645 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-config-data\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.302677 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-public-tls-certs\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.302700 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjgq9\" (UniqueName: 
\"kubernetes.io/projected/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-kube-api-access-qjgq9\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.302894 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-config-data-custom\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.302917 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-internal-tls-certs\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.302964 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-combined-ca-bundle\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.303333 4717 generic.go:334] "Generic (PLEG): container finished" podID="c7893e53-3622-4455-8eb8-459235541b6a" containerID="07d30e4c9eded45add94f59d8c81ef3846f9a2962ab1a8a899fd7e811abb205d" exitCode=0 Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.303477 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-77mb9" 
event={"ID":"c7893e53-3622-4455-8eb8-459235541b6a","Type":"ContainerDied","Data":"07d30e4c9eded45add94f59d8c81ef3846f9a2962ab1a8a899fd7e811abb205d"} Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.311007 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f66d9f78-rmqt2" event={"ID":"08ddc7c0-43cb-4abd-bcab-b2998ffbae42","Type":"ContainerStarted","Data":"8d8075b9210ccaa3ba9dd18beecb7d9e972c26e46ec4108e11da061b5be0fcbd"} Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.311543 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.311564 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.372459 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7f66d9f78-rmqt2" podStartSLOduration=4.372427116 podStartE2EDuration="4.372427116s" podCreationTimestamp="2026-02-18 12:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:07:45.348322116 +0000 UTC m=+1099.750423442" watchObservedRunningTime="2026-02-18 12:07:45.372427116 +0000 UTC m=+1099.774528432" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.405234 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-logs\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.405304 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-config-data\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.405326 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-public-tls-certs\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.405342 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjgq9\" (UniqueName: \"kubernetes.io/projected/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-kube-api-access-qjgq9\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.405541 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-config-data-custom\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.405575 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-internal-tls-certs\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.405627 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-combined-ca-bundle\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.412247 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-logs\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.416667 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-config-data-custom\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.418531 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-internal-tls-certs\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.419515 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-combined-ca-bundle\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.420543 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-public-tls-certs\") pod 
\"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.430540 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-config-data\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.475053 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjgq9\" (UniqueName: \"kubernetes.io/projected/afc0d8b7-77e8-4fa8-8fea-70d32de7045c-kube-api-access-qjgq9\") pod \"barbican-api-55d8f77d98-h2lh4\" (UID: \"afc0d8b7-77e8-4fa8-8fea-70d32de7045c\") " pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.549747 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.953351 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.953969 4717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 12:07:45 crc kubenswrapper[4717]: I0218 12:07:45.956701 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 12:07:46 crc kubenswrapper[4717]: I0218 12:07:46.176687 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55d8f77d98-h2lh4"] Feb 18 12:07:46 crc kubenswrapper[4717]: I0218 12:07:46.181671 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 12:07:46 crc kubenswrapper[4717]: I0218 12:07:46.181811 4717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 12:07:46 crc kubenswrapper[4717]: I0218 12:07:46.351809 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pdj8h" event={"ID":"70292f47-9494-42eb-a5ae-041c4bfc01ea","Type":"ContainerStarted","Data":"a254e6c74bfffe4657268329b67107f0c65112ce21409e3a6599de37ad3c0fd6"} Feb 18 12:07:46 crc kubenswrapper[4717]: I0218 12:07:46.363431 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" event={"ID":"0125b53d-0ef7-441e-99a0-118f348d0bb1","Type":"ContainerStarted","Data":"5dd0b74b0f72230c4a8a9eb607018f00b951ed6038b7bb75dae34f4f5534fde1"} Feb 18 12:07:46 crc kubenswrapper[4717]: I0218 12:07:46.364882 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:46 crc kubenswrapper[4717]: I0218 12:07:46.364933 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:46 crc kubenswrapper[4717]: I0218 12:07:46.364954 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:46 crc kubenswrapper[4717]: I0218 12:07:46.374391 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-pdj8h" podStartSLOduration=6.702977554 podStartE2EDuration="1m3.374367145s" podCreationTimestamp="2026-02-18 12:06:43 +0000 UTC" firstStartedPulling="2026-02-18 12:06:46.058992508 +0000 UTC m=+1040.461093824" lastFinishedPulling="2026-02-18 12:07:42.730382099 +0000 UTC m=+1097.132483415" observedRunningTime="2026-02-18 12:07:46.370866534 +0000 UTC m=+1100.772967850" watchObservedRunningTime="2026-02-18 12:07:46.374367145 +0000 UTC m=+1100.776468461" Feb 18 12:07:46 crc kubenswrapper[4717]: I0218 12:07:46.410910 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" podStartSLOduration=6.410881146 podStartE2EDuration="6.410881146s" podCreationTimestamp="2026-02-18 12:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:07:46.409994921 +0000 UTC m=+1100.812096237" watchObservedRunningTime="2026-02-18 12:07:46.410881146 +0000 UTC m=+1100.812982462" Feb 18 12:07:46 crc kubenswrapper[4717]: I0218 12:07:46.710770 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 12:07:47 crc kubenswrapper[4717]: I0218 12:07:47.407308 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-77mb9" event={"ID":"c7893e53-3622-4455-8eb8-459235541b6a","Type":"ContainerDied","Data":"334eaa8fa8b2ccdf115e5670afdeb886df16ddca163aef6ea4f2848e641182d1"} Feb 18 12:07:47 crc kubenswrapper[4717]: I0218 12:07:47.407808 4717 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="334eaa8fa8b2ccdf115e5670afdeb886df16ddca163aef6ea4f2848e641182d1" Feb 18 12:07:47 crc kubenswrapper[4717]: I0218 12:07:47.412759 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55d8f77d98-h2lh4" event={"ID":"afc0d8b7-77e8-4fa8-8fea-70d32de7045c","Type":"ContainerStarted","Data":"d876f41e5380ac086f333426529e20e1c25e774ef3aa1a7fca54aaf6e73722b8"} Feb 18 12:07:47 crc kubenswrapper[4717]: I0218 12:07:47.484792 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-77mb9" Feb 18 12:07:47 crc kubenswrapper[4717]: I0218 12:07:47.674806 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxgvq\" (UniqueName: \"kubernetes.io/projected/c7893e53-3622-4455-8eb8-459235541b6a-kube-api-access-wxgvq\") pod \"c7893e53-3622-4455-8eb8-459235541b6a\" (UID: \"c7893e53-3622-4455-8eb8-459235541b6a\") " Feb 18 12:07:47 crc kubenswrapper[4717]: I0218 12:07:47.674890 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7893e53-3622-4455-8eb8-459235541b6a-combined-ca-bundle\") pod \"c7893e53-3622-4455-8eb8-459235541b6a\" (UID: \"c7893e53-3622-4455-8eb8-459235541b6a\") " Feb 18 12:07:47 crc kubenswrapper[4717]: I0218 12:07:47.674969 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7893e53-3622-4455-8eb8-459235541b6a-config\") pod \"c7893e53-3622-4455-8eb8-459235541b6a\" (UID: \"c7893e53-3622-4455-8eb8-459235541b6a\") " Feb 18 12:07:47 crc kubenswrapper[4717]: I0218 12:07:47.697689 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7893e53-3622-4455-8eb8-459235541b6a-kube-api-access-wxgvq" (OuterVolumeSpecName: "kube-api-access-wxgvq") pod "c7893e53-3622-4455-8eb8-459235541b6a" 
(UID: "c7893e53-3622-4455-8eb8-459235541b6a"). InnerVolumeSpecName "kube-api-access-wxgvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:07:47 crc kubenswrapper[4717]: I0218 12:07:47.709115 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7893e53-3622-4455-8eb8-459235541b6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7893e53-3622-4455-8eb8-459235541b6a" (UID: "c7893e53-3622-4455-8eb8-459235541b6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:47 crc kubenswrapper[4717]: I0218 12:07:47.731377 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7893e53-3622-4455-8eb8-459235541b6a-config" (OuterVolumeSpecName: "config") pod "c7893e53-3622-4455-8eb8-459235541b6a" (UID: "c7893e53-3622-4455-8eb8-459235541b6a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:47 crc kubenswrapper[4717]: I0218 12:07:47.778183 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7893e53-3622-4455-8eb8-459235541b6a-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:47 crc kubenswrapper[4717]: I0218 12:07:47.782985 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxgvq\" (UniqueName: \"kubernetes.io/projected/c7893e53-3622-4455-8eb8-459235541b6a-kube-api-access-wxgvq\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:47 crc kubenswrapper[4717]: I0218 12:07:47.783038 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7893e53-3622-4455-8eb8-459235541b6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:48 crc kubenswrapper[4717]: I0218 12:07:48.423585 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-77mb9" Feb 18 12:07:48 crc kubenswrapper[4717]: I0218 12:07:48.781319 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-z58jl"] Feb 18 12:07:48 crc kubenswrapper[4717]: I0218 12:07:48.815655 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ppd2g"] Feb 18 12:07:48 crc kubenswrapper[4717]: E0218 12:07:48.816640 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7893e53-3622-4455-8eb8-459235541b6a" containerName="neutron-db-sync" Feb 18 12:07:48 crc kubenswrapper[4717]: I0218 12:07:48.816740 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7893e53-3622-4455-8eb8-459235541b6a" containerName="neutron-db-sync" Feb 18 12:07:48 crc kubenswrapper[4717]: I0218 12:07:48.817003 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7893e53-3622-4455-8eb8-459235541b6a" containerName="neutron-db-sync" Feb 18 12:07:48 crc kubenswrapper[4717]: I0218 12:07:48.818301 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:48 crc kubenswrapper[4717]: I0218 12:07:48.930709 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ppd2g"] Feb 18 12:07:48 crc kubenswrapper[4717]: I0218 12:07:48.942808 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:48 crc kubenswrapper[4717]: I0218 12:07:48.942869 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:48 crc kubenswrapper[4717]: I0218 12:07:48.942917 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:48 crc kubenswrapper[4717]: I0218 12:07:48.942944 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:48 crc kubenswrapper[4717]: I0218 12:07:48.943036 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-config\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:48 crc kubenswrapper[4717]: I0218 12:07:48.943101 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvsls\" (UniqueName: \"kubernetes.io/projected/0a03e161-b29e-4556-b3c2-890a4ecf2885-kube-api-access-kvsls\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.044636 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.044724 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.044786 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.044828 4717 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.044938 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-config\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.045032 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvsls\" (UniqueName: \"kubernetes.io/projected/0a03e161-b29e-4556-b3c2-890a4ecf2885-kube-api-access-kvsls\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.048876 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.049530 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.049647 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-config\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.050846 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.060750 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.092660 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8584d7b78b-d8rzh"] Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.094947 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.100000 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8584d7b78b-d8rzh"] Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.100525 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rp5hc" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.100796 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.100963 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.101300 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.163646 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvsls\" (UniqueName: \"kubernetes.io/projected/0a03e161-b29e-4556-b3c2-890a4ecf2885-kube-api-access-kvsls\") pod \"dnsmasq-dns-85ff748b95-ppd2g\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.179662 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.251648 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-config\") pod \"neutron-8584d7b78b-d8rzh\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.251752 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-combined-ca-bundle\") pod \"neutron-8584d7b78b-d8rzh\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.251777 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4zv4\" (UniqueName: \"kubernetes.io/projected/46f23333-73d2-4175-85f5-9ceb356a42ad-kube-api-access-j4zv4\") pod \"neutron-8584d7b78b-d8rzh\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.251857 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-ovndb-tls-certs\") pod \"neutron-8584d7b78b-d8rzh\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.251878 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-httpd-config\") pod \"neutron-8584d7b78b-d8rzh\" (UID: 
\"46f23333-73d2-4175-85f5-9ceb356a42ad\") " pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.356019 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-config\") pod \"neutron-8584d7b78b-d8rzh\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.356127 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-combined-ca-bundle\") pod \"neutron-8584d7b78b-d8rzh\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.356165 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4zv4\" (UniqueName: \"kubernetes.io/projected/46f23333-73d2-4175-85f5-9ceb356a42ad-kube-api-access-j4zv4\") pod \"neutron-8584d7b78b-d8rzh\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.356356 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-ovndb-tls-certs\") pod \"neutron-8584d7b78b-d8rzh\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.356386 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-httpd-config\") pod \"neutron-8584d7b78b-d8rzh\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 
crc kubenswrapper[4717]: I0218 12:07:49.366692 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-config\") pod \"neutron-8584d7b78b-d8rzh\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.366922 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-httpd-config\") pod \"neutron-8584d7b78b-d8rzh\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.367739 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-ovndb-tls-certs\") pod \"neutron-8584d7b78b-d8rzh\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.376451 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-combined-ca-bundle\") pod \"neutron-8584d7b78b-d8rzh\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.404948 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4zv4\" (UniqueName: \"kubernetes.io/projected/46f23333-73d2-4175-85f5-9ceb356a42ad-kube-api-access-j4zv4\") pod \"neutron-8584d7b78b-d8rzh\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.435962 4717 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" podUID="0125b53d-0ef7-441e-99a0-118f348d0bb1" containerName="dnsmasq-dns" containerID="cri-o://5dd0b74b0f72230c4a8a9eb607018f00b951ed6038b7bb75dae34f4f5534fde1" gracePeriod=10 Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.625099 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:07:49 crc kubenswrapper[4717]: I0218 12:07:49.969669 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ppd2g"] Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.458436 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55d8f77d98-h2lh4" event={"ID":"afc0d8b7-77e8-4fa8-8fea-70d32de7045c","Type":"ContainerStarted","Data":"9ca982f070f3cdcc81573fef0e95d6ce31ea7c11875fe83ecd47b75cf01dbb5e"} Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.458968 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8584d7b78b-d8rzh"] Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.461114 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8649d4b975-zblq7" event={"ID":"45e47daf-054d-4262-b76c-349fb97ec950","Type":"ContainerStarted","Data":"d0251922d07cb456591e625b99789dbe56eb7bcfe59c2f29c78a5372f92b751d"} Feb 18 12:07:50 crc kubenswrapper[4717]: W0218 12:07:50.470664 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46f23333_73d2_4175_85f5_9ceb356a42ad.slice/crio-eb39ac4a629f3d69cd9553f60b3f43192fd9a762c8f82b201f6acde8c25b8494 WatchSource:0}: Error finding container eb39ac4a629f3d69cd9553f60b3f43192fd9a762c8f82b201f6acde8c25b8494: Status 404 returned error can't find the container with id eb39ac4a629f3d69cd9553f60b3f43192fd9a762c8f82b201f6acde8c25b8494 Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.470964 4717 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" event={"ID":"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5","Type":"ContainerStarted","Data":"87c6fe5f4bfafc28396d98c54f070cf3b0d0053b3b3fe30c062b3e382ccab607"} Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.476422 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-698f4fc767-8w7f5" event={"ID":"9a262b0a-4d1c-46ba-b281-d95194a8bfa2","Type":"ContainerStarted","Data":"70e84b0a71afc8b72c8be2939152aa334d677f9629452228d6a619a9e3118d52"} Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.482732 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859c78c974-8tzts" event={"ID":"22aba54a-8df1-4b52-821f-c25b7ff37d18","Type":"ContainerStarted","Data":"55a3e2f5c403b74702580205b3a8e2a10afa0e1b1f76a63d6f5dd5e89637f595"} Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.485417 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.486245 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" event={"ID":"0a03e161-b29e-4556-b3c2-890a4ecf2885","Type":"ContainerStarted","Data":"f7f51fce7c3629c411a779a3cdef10ae8d3757378427f8be4c630c8477e7977f"} Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.493291 4717 generic.go:334] "Generic (PLEG): container finished" podID="0125b53d-0ef7-441e-99a0-118f348d0bb1" containerID="5dd0b74b0f72230c4a8a9eb607018f00b951ed6038b7bb75dae34f4f5534fde1" exitCode=0 Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.493360 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" event={"ID":"0125b53d-0ef7-441e-99a0-118f348d0bb1","Type":"ContainerDied","Data":"5dd0b74b0f72230c4a8a9eb607018f00b951ed6038b7bb75dae34f4f5534fde1"} Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.493398 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" event={"ID":"0125b53d-0ef7-441e-99a0-118f348d0bb1","Type":"ContainerDied","Data":"d133b1abcfb54139ff19bff1b9bd37688aa46fdfc9720d7d83c910856b8868f5"} Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.493418 4717 scope.go:117] "RemoveContainer" containerID="5dd0b74b0f72230c4a8a9eb607018f00b951ed6038b7bb75dae34f4f5534fde1" Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.493586 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-z58jl" Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.554405 4717 scope.go:117] "RemoveContainer" containerID="f93e0d7f681dc49e7e58ba27bdd06555fd2e9d6662c0ab0694aede69bdc694f6" Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.605927 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-ovsdbserver-sb\") pod \"0125b53d-0ef7-441e-99a0-118f348d0bb1\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.606073 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-dns-svc\") pod \"0125b53d-0ef7-441e-99a0-118f348d0bb1\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.606136 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4qrn\" (UniqueName: \"kubernetes.io/projected/0125b53d-0ef7-441e-99a0-118f348d0bb1-kube-api-access-q4qrn\") pod \"0125b53d-0ef7-441e-99a0-118f348d0bb1\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.606170 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-ovsdbserver-nb\") pod \"0125b53d-0ef7-441e-99a0-118f348d0bb1\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.606426 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-config\") pod \"0125b53d-0ef7-441e-99a0-118f348d0bb1\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") 
" Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.606467 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-dns-swift-storage-0\") pod \"0125b53d-0ef7-441e-99a0-118f348d0bb1\" (UID: \"0125b53d-0ef7-441e-99a0-118f348d0bb1\") " Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.617743 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0125b53d-0ef7-441e-99a0-118f348d0bb1-kube-api-access-q4qrn" (OuterVolumeSpecName: "kube-api-access-q4qrn") pod "0125b53d-0ef7-441e-99a0-118f348d0bb1" (UID: "0125b53d-0ef7-441e-99a0-118f348d0bb1"). InnerVolumeSpecName "kube-api-access-q4qrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.631270 4717 scope.go:117] "RemoveContainer" containerID="5dd0b74b0f72230c4a8a9eb607018f00b951ed6038b7bb75dae34f4f5534fde1" Feb 18 12:07:50 crc kubenswrapper[4717]: E0218 12:07:50.632535 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dd0b74b0f72230c4a8a9eb607018f00b951ed6038b7bb75dae34f4f5534fde1\": container with ID starting with 5dd0b74b0f72230c4a8a9eb607018f00b951ed6038b7bb75dae34f4f5534fde1 not found: ID does not exist" containerID="5dd0b74b0f72230c4a8a9eb607018f00b951ed6038b7bb75dae34f4f5534fde1" Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.632579 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dd0b74b0f72230c4a8a9eb607018f00b951ed6038b7bb75dae34f4f5534fde1"} err="failed to get container status \"5dd0b74b0f72230c4a8a9eb607018f00b951ed6038b7bb75dae34f4f5534fde1\": rpc error: code = NotFound desc = could not find container \"5dd0b74b0f72230c4a8a9eb607018f00b951ed6038b7bb75dae34f4f5534fde1\": container with ID starting with 
5dd0b74b0f72230c4a8a9eb607018f00b951ed6038b7bb75dae34f4f5534fde1 not found: ID does not exist" Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.632609 4717 scope.go:117] "RemoveContainer" containerID="f93e0d7f681dc49e7e58ba27bdd06555fd2e9d6662c0ab0694aede69bdc694f6" Feb 18 12:07:50 crc kubenswrapper[4717]: E0218 12:07:50.640068 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f93e0d7f681dc49e7e58ba27bdd06555fd2e9d6662c0ab0694aede69bdc694f6\": container with ID starting with f93e0d7f681dc49e7e58ba27bdd06555fd2e9d6662c0ab0694aede69bdc694f6 not found: ID does not exist" containerID="f93e0d7f681dc49e7e58ba27bdd06555fd2e9d6662c0ab0694aede69bdc694f6" Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.640135 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f93e0d7f681dc49e7e58ba27bdd06555fd2e9d6662c0ab0694aede69bdc694f6"} err="failed to get container status \"f93e0d7f681dc49e7e58ba27bdd06555fd2e9d6662c0ab0694aede69bdc694f6\": rpc error: code = NotFound desc = could not find container \"f93e0d7f681dc49e7e58ba27bdd06555fd2e9d6662c0ab0694aede69bdc694f6\": container with ID starting with f93e0d7f681dc49e7e58ba27bdd06555fd2e9d6662c0ab0694aede69bdc694f6 not found: ID does not exist" Feb 18 12:07:50 crc kubenswrapper[4717]: I0218 12:07:50.711314 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4qrn\" (UniqueName: \"kubernetes.io/projected/0125b53d-0ef7-441e-99a0-118f348d0bb1-kube-api-access-q4qrn\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.018353 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0125b53d-0ef7-441e-99a0-118f348d0bb1" (UID: "0125b53d-0ef7-441e-99a0-118f348d0bb1"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.018950 4717 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.029406 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0125b53d-0ef7-441e-99a0-118f348d0bb1" (UID: "0125b53d-0ef7-441e-99a0-118f348d0bb1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.121949 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.150375 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0125b53d-0ef7-441e-99a0-118f348d0bb1" (UID: "0125b53d-0ef7-441e-99a0-118f348d0bb1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.151600 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-config" (OuterVolumeSpecName: "config") pod "0125b53d-0ef7-441e-99a0-118f348d0bb1" (UID: "0125b53d-0ef7-441e-99a0-118f348d0bb1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.174965 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0125b53d-0ef7-441e-99a0-118f348d0bb1" (UID: "0125b53d-0ef7-441e-99a0-118f348d0bb1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.224336 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.224589 4717 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.224649 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0125b53d-0ef7-441e-99a0-118f348d0bb1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.437462 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-z58jl"] Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.455673 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-z58jl"] Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.515685 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859c78c974-8tzts" event={"ID":"22aba54a-8df1-4b52-821f-c25b7ff37d18","Type":"ContainerStarted","Data":"2412adada9e5f4150422e207087565feb8dd7c4139158d0fb4219e4afb062d0f"} Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.525403 4717 
generic.go:334] "Generic (PLEG): container finished" podID="0a03e161-b29e-4556-b3c2-890a4ecf2885" containerID="4016b4b42fdaf62d554c295a3e263c33fa10ad02124e9b4caaa7f79cc62415b3" exitCode=0 Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.525720 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" event={"ID":"0a03e161-b29e-4556-b3c2-890a4ecf2885","Type":"ContainerDied","Data":"4016b4b42fdaf62d554c295a3e263c33fa10ad02124e9b4caaa7f79cc62415b3"} Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.540498 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-859c78c974-8tzts" podStartSLOduration=5.499062136 podStartE2EDuration="11.540462412s" podCreationTimestamp="2026-02-18 12:07:40 +0000 UTC" firstStartedPulling="2026-02-18 12:07:42.844077763 +0000 UTC m=+1097.246179079" lastFinishedPulling="2026-02-18 12:07:48.885478039 +0000 UTC m=+1103.287579355" observedRunningTime="2026-02-18 12:07:51.537341381 +0000 UTC m=+1105.939442697" watchObservedRunningTime="2026-02-18 12:07:51.540462412 +0000 UTC m=+1105.942563728" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.551917 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8584d7b78b-d8rzh" event={"ID":"46f23333-73d2-4175-85f5-9ceb356a42ad","Type":"ContainerStarted","Data":"f7101619926bdd05c3451459c84628bc5fab3e9a42bf113b23a38797ae940586"} Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.552337 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8584d7b78b-d8rzh" event={"ID":"46f23333-73d2-4175-85f5-9ceb356a42ad","Type":"ContainerStarted","Data":"eb39ac4a629f3d69cd9553f60b3f43192fd9a762c8f82b201f6acde8c25b8494"} Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.583869 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55d8f77d98-h2lh4" 
event={"ID":"afc0d8b7-77e8-4fa8-8fea-70d32de7045c","Type":"ContainerStarted","Data":"7ba9d54d4d8c68fe351347303abfb323c35e701e612fedc0efe84a4d4aa34986"} Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.585307 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.585658 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.608956 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8649d4b975-zblq7" event={"ID":"45e47daf-054d-4262-b76c-349fb97ec950","Type":"ContainerStarted","Data":"8f682c615658be43430791153855bd67e2688c186accb6a3d754ce0e7ecbc339"} Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.637842 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" event={"ID":"42c182ca-f340-4ddf-ac38-b5eba6d9dbe5","Type":"ContainerStarted","Data":"fe37272a623e449c9b44cae342afb3cffda39198a808be6f87a9520a96902077"} Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.653458 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-698f4fc767-8w7f5" event={"ID":"9a262b0a-4d1c-46ba-b281-d95194a8bfa2","Type":"ContainerStarted","Data":"8e37bf7109799268f1e7fd5e109350e2ca472fc89a6970764c2bae8e20bb8940"} Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.656598 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-55d8f77d98-h2lh4" podStartSLOduration=6.656569085 podStartE2EDuration="6.656569085s" podCreationTimestamp="2026-02-18 12:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:07:51.638721387 +0000 UTC m=+1106.040822703" watchObservedRunningTime="2026-02-18 
12:07:51.656569085 +0000 UTC m=+1106.058670401" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.713663 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6b857cc988-tklbq" podStartSLOduration=4.7188361610000005 podStartE2EDuration="10.713627633s" podCreationTimestamp="2026-02-18 12:07:41 +0000 UTC" firstStartedPulling="2026-02-18 12:07:42.892437507 +0000 UTC m=+1097.294538823" lastFinishedPulling="2026-02-18 12:07:48.887228979 +0000 UTC m=+1103.289330295" observedRunningTime="2026-02-18 12:07:51.65913902 +0000 UTC m=+1106.061240336" watchObservedRunningTime="2026-02-18 12:07:51.713627633 +0000 UTC m=+1106.115728949" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.743142 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-8649d4b975-zblq7" podStartSLOduration=4.834926403 podStartE2EDuration="10.743111069s" podCreationTimestamp="2026-02-18 12:07:41 +0000 UTC" firstStartedPulling="2026-02-18 12:07:42.979874808 +0000 UTC m=+1097.381976134" lastFinishedPulling="2026-02-18 12:07:48.888059484 +0000 UTC m=+1103.290160800" observedRunningTime="2026-02-18 12:07:51.694901929 +0000 UTC m=+1106.097003245" watchObservedRunningTime="2026-02-18 12:07:51.743111069 +0000 UTC m=+1106.145212385" Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.801241 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-859c78c974-8tzts"] Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.861956 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-698f4fc767-8w7f5"] Feb 18 12:07:51 crc kubenswrapper[4717]: I0218 12:07:51.888238 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-698f4fc767-8w7f5" podStartSLOduration=5.704999369 podStartE2EDuration="11.888178284s" podCreationTimestamp="2026-02-18 12:07:40 +0000 UTC" 
firstStartedPulling="2026-02-18 12:07:42.706549767 +0000 UTC m=+1097.108651083" lastFinishedPulling="2026-02-18 12:07:48.889728682 +0000 UTC m=+1103.291829998" observedRunningTime="2026-02-18 12:07:51.771129413 +0000 UTC m=+1106.173230729" watchObservedRunningTime="2026-02-18 12:07:51.888178284 +0000 UTC m=+1106.290279610" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.634093 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7bf89fd777-rchb4"] Feb 18 12:07:52 crc kubenswrapper[4717]: E0218 12:07:52.634638 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0125b53d-0ef7-441e-99a0-118f348d0bb1" containerName="dnsmasq-dns" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.634655 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="0125b53d-0ef7-441e-99a0-118f348d0bb1" containerName="dnsmasq-dns" Feb 18 12:07:52 crc kubenswrapper[4717]: E0218 12:07:52.634680 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0125b53d-0ef7-441e-99a0-118f348d0bb1" containerName="init" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.634687 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="0125b53d-0ef7-441e-99a0-118f348d0bb1" containerName="init" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.634887 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="0125b53d-0ef7-441e-99a0-118f348d0bb1" containerName="dnsmasq-dns" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.641600 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.646613 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.657174 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bf89fd777-rchb4"] Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.660155 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.733800 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltd9s\" (UniqueName: \"kubernetes.io/projected/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-kube-api-access-ltd9s\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.734411 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-public-tls-certs\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.734542 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-combined-ca-bundle\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.734684 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-httpd-config\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.734936 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-internal-tls-certs\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.735072 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-config\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.735186 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-ovndb-tls-certs\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.837363 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-internal-tls-certs\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.837482 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-config\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.837509 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-ovndb-tls-certs\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.837570 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltd9s\" (UniqueName: \"kubernetes.io/projected/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-kube-api-access-ltd9s\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.837597 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-public-tls-certs\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.837616 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-combined-ca-bundle\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.837686 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-httpd-config\") pod 
\"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.850939 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-combined-ca-bundle\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.858187 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-internal-tls-certs\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.863240 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-config\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.870124 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-ovndb-tls-certs\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.873124 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-httpd-config\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc 
kubenswrapper[4717]: I0218 12:07:52.880603 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltd9s\" (UniqueName: \"kubernetes.io/projected/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-kube-api-access-ltd9s\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:52 crc kubenswrapper[4717]: I0218 12:07:52.910979 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c-public-tls-certs\") pod \"neutron-7bf89fd777-rchb4\" (UID: \"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c\") " pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:53 crc kubenswrapper[4717]: I0218 12:07:53.006239 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:07:53 crc kubenswrapper[4717]: I0218 12:07:53.067683 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0125b53d-0ef7-441e-99a0-118f348d0bb1" path="/var/lib/kubelet/pods/0125b53d-0ef7-441e-99a0-118f348d0bb1/volumes" Feb 18 12:07:53 crc kubenswrapper[4717]: I0218 12:07:53.206848 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b5f4c76fb-t68w8" podUID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 18 12:07:53 crc kubenswrapper[4717]: I0218 12:07:53.400762 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-d949b4564-9ns6m" podUID="72d097e6-a40b-4e3e-8376-f3866f63e9d3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 18 12:07:53 crc kubenswrapper[4717]: I0218 
12:07:53.727197 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-859c78c974-8tzts" podUID="22aba54a-8df1-4b52-821f-c25b7ff37d18" containerName="barbican-keystone-listener-log" containerID="cri-o://55a3e2f5c403b74702580205b3a8e2a10afa0e1b1f76a63d6f5dd5e89637f595" gracePeriod=30 Feb 18 12:07:53 crc kubenswrapper[4717]: I0218 12:07:53.727283 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-859c78c974-8tzts" podUID="22aba54a-8df1-4b52-821f-c25b7ff37d18" containerName="barbican-keystone-listener" containerID="cri-o://2412adada9e5f4150422e207087565feb8dd7c4139158d0fb4219e4afb062d0f" gracePeriod=30 Feb 18 12:07:53 crc kubenswrapper[4717]: I0218 12:07:53.727359 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-698f4fc767-8w7f5" podUID="9a262b0a-4d1c-46ba-b281-d95194a8bfa2" containerName="barbican-worker-log" containerID="cri-o://70e84b0a71afc8b72c8be2939152aa334d677f9629452228d6a619a9e3118d52" gracePeriod=30 Feb 18 12:07:53 crc kubenswrapper[4717]: I0218 12:07:53.727431 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-698f4fc767-8w7f5" podUID="9a262b0a-4d1c-46ba-b281-d95194a8bfa2" containerName="barbican-worker" containerID="cri-o://8e37bf7109799268f1e7fd5e109350e2ca472fc89a6970764c2bae8e20bb8940" gracePeriod=30 Feb 18 12:07:54 crc kubenswrapper[4717]: E0218 12:07:54.480895 4717 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70292f47_9494_42eb_a5ae_041c4bfc01ea.slice/crio-conmon-a254e6c74bfffe4657268329b67107f0c65112ce21409e3a6599de37ad3c0fd6.scope\": RecentStats: unable to find data in memory cache]" Feb 18 12:07:54 crc kubenswrapper[4717]: I0218 12:07:54.751706 4717 generic.go:334] "Generic (PLEG): 
container finished" podID="9a262b0a-4d1c-46ba-b281-d95194a8bfa2" containerID="70e84b0a71afc8b72c8be2939152aa334d677f9629452228d6a619a9e3118d52" exitCode=143 Feb 18 12:07:54 crc kubenswrapper[4717]: I0218 12:07:54.751831 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-698f4fc767-8w7f5" event={"ID":"9a262b0a-4d1c-46ba-b281-d95194a8bfa2","Type":"ContainerDied","Data":"70e84b0a71afc8b72c8be2939152aa334d677f9629452228d6a619a9e3118d52"} Feb 18 12:07:54 crc kubenswrapper[4717]: I0218 12:07:54.763846 4717 generic.go:334] "Generic (PLEG): container finished" podID="70292f47-9494-42eb-a5ae-041c4bfc01ea" containerID="a254e6c74bfffe4657268329b67107f0c65112ce21409e3a6599de37ad3c0fd6" exitCode=0 Feb 18 12:07:54 crc kubenswrapper[4717]: I0218 12:07:54.763931 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pdj8h" event={"ID":"70292f47-9494-42eb-a5ae-041c4bfc01ea","Type":"ContainerDied","Data":"a254e6c74bfffe4657268329b67107f0c65112ce21409e3a6599de37ad3c0fd6"} Feb 18 12:07:54 crc kubenswrapper[4717]: I0218 12:07:54.765992 4717 generic.go:334] "Generic (PLEG): container finished" podID="22aba54a-8df1-4b52-821f-c25b7ff37d18" containerID="55a3e2f5c403b74702580205b3a8e2a10afa0e1b1f76a63d6f5dd5e89637f595" exitCode=143 Feb 18 12:07:54 crc kubenswrapper[4717]: I0218 12:07:54.766048 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859c78c974-8tzts" event={"ID":"22aba54a-8df1-4b52-821f-c25b7ff37d18","Type":"ContainerDied","Data":"55a3e2f5c403b74702580205b3a8e2a10afa0e1b1f76a63d6f5dd5e89637f595"} Feb 18 12:07:54 crc kubenswrapper[4717]: I0218 12:07:54.987563 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:55 crc kubenswrapper[4717]: I0218 12:07:55.203730 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:07:57 
crc kubenswrapper[4717]: I0218 12:07:57.402365 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.462198 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.476895 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-scripts\") pod \"70292f47-9494-42eb-a5ae-041c4bfc01ea\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.477027 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-combined-ca-bundle\") pod \"70292f47-9494-42eb-a5ae-041c4bfc01ea\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.477252 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/70292f47-9494-42eb-a5ae-041c4bfc01ea-etc-machine-id\") pod \"70292f47-9494-42eb-a5ae-041c4bfc01ea\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.477328 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njt9z\" (UniqueName: \"kubernetes.io/projected/70292f47-9494-42eb-a5ae-041c4bfc01ea-kube-api-access-njt9z\") pod \"70292f47-9494-42eb-a5ae-041c4bfc01ea\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.477360 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-config-data\") pod \"70292f47-9494-42eb-a5ae-041c4bfc01ea\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.477399 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-db-sync-config-data\") pod \"70292f47-9494-42eb-a5ae-041c4bfc01ea\" (UID: \"70292f47-9494-42eb-a5ae-041c4bfc01ea\") " Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.478427 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70292f47-9494-42eb-a5ae-041c4bfc01ea-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "70292f47-9494-42eb-a5ae-041c4bfc01ea" (UID: "70292f47-9494-42eb-a5ae-041c4bfc01ea"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.489213 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "70292f47-9494-42eb-a5ae-041c4bfc01ea" (UID: "70292f47-9494-42eb-a5ae-041c4bfc01ea"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.489296 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-scripts" (OuterVolumeSpecName: "scripts") pod "70292f47-9494-42eb-a5ae-041c4bfc01ea" (UID: "70292f47-9494-42eb-a5ae-041c4bfc01ea"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.490512 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70292f47-9494-42eb-a5ae-041c4bfc01ea-kube-api-access-njt9z" (OuterVolumeSpecName: "kube-api-access-njt9z") pod "70292f47-9494-42eb-a5ae-041c4bfc01ea" (UID: "70292f47-9494-42eb-a5ae-041c4bfc01ea"). InnerVolumeSpecName "kube-api-access-njt9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.508299 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70292f47-9494-42eb-a5ae-041c4bfc01ea" (UID: "70292f47-9494-42eb-a5ae-041c4bfc01ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.548014 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-config-data" (OuterVolumeSpecName: "config-data") pod "70292f47-9494-42eb-a5ae-041c4bfc01ea" (UID: "70292f47-9494-42eb-a5ae-041c4bfc01ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.579743 4717 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.579783 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.579794 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.579803 4717 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/70292f47-9494-42eb-a5ae-041c4bfc01ea-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.579812 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njt9z\" (UniqueName: \"kubernetes.io/projected/70292f47-9494-42eb-a5ae-041c4bfc01ea-kube-api-access-njt9z\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.579827 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70292f47-9494-42eb-a5ae-041c4bfc01ea-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.837478 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pdj8h" event={"ID":"70292f47-9494-42eb-a5ae-041c4bfc01ea","Type":"ContainerDied","Data":"a0fd5c79474fe4fcd19261b3e42f0e45130c663ecdc83932a08d03175a4946ec"} Feb 18 12:07:57 crc 
kubenswrapper[4717]: I0218 12:07:57.837552 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0fd5c79474fe4fcd19261b3e42f0e45130c663ecdc83932a08d03175a4946ec" Feb 18 12:07:57 crc kubenswrapper[4717]: I0218 12:07:57.837639 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pdj8h" Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.782176 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 12:07:58 crc kubenswrapper[4717]: E0218 12:07:58.783155 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70292f47-9494-42eb-a5ae-041c4bfc01ea" containerName="cinder-db-sync" Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.783172 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="70292f47-9494-42eb-a5ae-041c4bfc01ea" containerName="cinder-db-sync" Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.783418 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="70292f47-9494-42eb-a5ae-041c4bfc01ea" containerName="cinder-db-sync" Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.784634 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.791846 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kwqwh" Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.792092 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.792169 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.792229 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.811234 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.836122 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ppd2g"] Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.915249 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.915359 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5971f764-400d-4e52-8d75-53bd7384648b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.915415 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.915442 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-scripts\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.915488 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-config-data\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.915538 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq8kh\" (UniqueName: \"kubernetes.io/projected/5971f764-400d-4e52-8d75-53bd7384648b-kube-api-access-lq8kh\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.959112 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-mmbsj"] Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.965458 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:58 crc kubenswrapper[4717]: I0218 12:07:58.989407 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-mmbsj"] Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.017784 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.018380 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5971f764-400d-4e52-8d75-53bd7384648b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.018606 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.018720 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-scripts\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.018855 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.018978 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq8kh\" (UniqueName: \"kubernetes.io/projected/5971f764-400d-4e52-8d75-53bd7384648b-kube-api-access-lq8kh\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.018548 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5971f764-400d-4e52-8d75-53bd7384648b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.034863 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.046473 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.060064 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.064141 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.112350 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.112881 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq8kh\" (UniqueName: \"kubernetes.io/projected/5971f764-400d-4e52-8d75-53bd7384648b-kube-api-access-lq8kh\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.114034 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-config-data\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " pod="openstack/cinder-scheduler-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.116621 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-scripts\") pod \"cinder-scheduler-0\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " 
pod="openstack/cinder-scheduler-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.122987 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.123154 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.123200 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-config-data\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.123273 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f86rv\" (UniqueName: \"kubernetes.io/projected/a718c21d-5293-46fb-ba9b-777285ff3fb8-kube-api-access-f86rv\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.123337 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-config\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.123389 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.123414 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.123433 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-config-data-custom\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.123460 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.123517 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a718c21d-5293-46fb-ba9b-777285ff3fb8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 
12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.123543 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fk4q\" (UniqueName: \"kubernetes.io/projected/40820fca-bd29-4dfa-bfb3-04a2209eee32-kube-api-access-2fk4q\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.123623 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-scripts\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.123643 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a718c21d-5293-46fb-ba9b-777285ff3fb8-logs\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.140128 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.186529 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.225726 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.225825 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-config-data\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.225893 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f86rv\" (UniqueName: \"kubernetes.io/projected/a718c21d-5293-46fb-ba9b-777285ff3fb8-kube-api-access-f86rv\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.225950 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-config\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.225983 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: 
\"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.226005 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.226021 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-config-data-custom\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.226044 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.226083 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a718c21d-5293-46fb-ba9b-777285ff3fb8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.226105 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fk4q\" (UniqueName: \"kubernetes.io/projected/40820fca-bd29-4dfa-bfb3-04a2209eee32-kube-api-access-2fk4q\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 
12:07:59.226164 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-scripts\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.226194 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a718c21d-5293-46fb-ba9b-777285ff3fb8-logs\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.226223 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.228137 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.231098 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.232609 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/a718c21d-5293-46fb-ba9b-777285ff3fb8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.237936 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-scripts\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.238724 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a718c21d-5293-46fb-ba9b-777285ff3fb8-logs\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.239508 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-config\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.240576 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.241120 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: 
I0218 12:07:59.262080 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.266497 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-config-data\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.270119 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-config-data-custom\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.273665 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fk4q\" (UniqueName: \"kubernetes.io/projected/40820fca-bd29-4dfa-bfb3-04a2209eee32-kube-api-access-2fk4q\") pod \"dnsmasq-dns-5c9776ccc5-mmbsj\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.274914 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f86rv\" (UniqueName: \"kubernetes.io/projected/a718c21d-5293-46fb-ba9b-777285ff3fb8-kube-api-access-f86rv\") pod \"cinder-api-0\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.320847 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.502528 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 12:07:59 crc kubenswrapper[4717]: I0218 12:07:59.949427 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55d8f77d98-h2lh4" Feb 18 12:08:00 crc kubenswrapper[4717]: I0218 12:08:00.023034 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7f66d9f78-rmqt2"] Feb 18 12:08:00 crc kubenswrapper[4717]: I0218 12:08:00.023355 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7f66d9f78-rmqt2" podUID="08ddc7c0-43cb-4abd-bcab-b2998ffbae42" containerName="barbican-api-log" containerID="cri-o://054e0eb3af972319c506fbf2a6945db5622def0807a258a741b64e0bbb7f9520" gracePeriod=30 Feb 18 12:08:00 crc kubenswrapper[4717]: I0218 12:08:00.023846 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7f66d9f78-rmqt2" podUID="08ddc7c0-43cb-4abd-bcab-b2998ffbae42" containerName="barbican-api" containerID="cri-o://8d8075b9210ccaa3ba9dd18beecb7d9e972c26e46ec4108e11da061b5be0fcbd" gracePeriod=30 Feb 18 12:08:00 crc kubenswrapper[4717]: E0218 12:08:00.477426 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 18 12:08:00 crc kubenswrapper[4717]: E0218 12:08:00.477718 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8x7f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce4f631a-18d2-46ea-b1e1-17b26808f94d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 12:08:00 crc kubenswrapper[4717]: E0218 12:08:00.478985 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="ce4f631a-18d2-46ea-b1e1-17b26808f94d" Feb 18 12:08:00 crc kubenswrapper[4717]: I0218 12:08:00.915998 4717 generic.go:334] "Generic (PLEG): container finished" podID="d139d1d1-94c1-492e-9fe6-7d2a6e6497f8" containerID="901fc8a39c7615d205a0798f618d24c53a093eafe3f22eada226567a24679f85" exitCode=137 Feb 18 12:08:00 crc kubenswrapper[4717]: I0218 12:08:00.917214 4717 generic.go:334] "Generic (PLEG): container finished" podID="d139d1d1-94c1-492e-9fe6-7d2a6e6497f8" 
containerID="abd1180d7054dc275a08de443fe1297b7d43f2d6a025de0140724414ed5ff445" exitCode=137 Feb 18 12:08:00 crc kubenswrapper[4717]: I0218 12:08:00.916061 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b69c68f97-t72j4" event={"ID":"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8","Type":"ContainerDied","Data":"901fc8a39c7615d205a0798f618d24c53a093eafe3f22eada226567a24679f85"} Feb 18 12:08:00 crc kubenswrapper[4717]: I0218 12:08:00.917360 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b69c68f97-t72j4" event={"ID":"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8","Type":"ContainerDied","Data":"abd1180d7054dc275a08de443fe1297b7d43f2d6a025de0140724414ed5ff445"} Feb 18 12:08:00 crc kubenswrapper[4717]: I0218 12:08:00.920132 4717 generic.go:334] "Generic (PLEG): container finished" podID="08ddc7c0-43cb-4abd-bcab-b2998ffbae42" containerID="054e0eb3af972319c506fbf2a6945db5622def0807a258a741b64e0bbb7f9520" exitCode=143 Feb 18 12:08:00 crc kubenswrapper[4717]: I0218 12:08:00.920361 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce4f631a-18d2-46ea-b1e1-17b26808f94d" containerName="ceilometer-notification-agent" containerID="cri-o://ef6a127c77e4b240564a23fbe9df66381c84028e8bb763d20f8050cd35a5847a" gracePeriod=30 Feb 18 12:08:00 crc kubenswrapper[4717]: I0218 12:08:00.920736 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f66d9f78-rmqt2" event={"ID":"08ddc7c0-43cb-4abd-bcab-b2998ffbae42","Type":"ContainerDied","Data":"054e0eb3af972319c506fbf2a6945db5622def0807a258a741b64e0bbb7f9520"} Feb 18 12:08:00 crc kubenswrapper[4717]: I0218 12:08:00.921115 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce4f631a-18d2-46ea-b1e1-17b26808f94d" containerName="sg-core" containerID="cri-o://353e55e6efec9856f25670a92eeb6aa76869217f096491cc5be61ef688fbd937" gracePeriod=30 Feb 18 12:08:01 crc 
kubenswrapper[4717]: I0218 12:08:01.296192 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 12:08:01 crc kubenswrapper[4717]: I0218 12:08:01.328946 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bf89fd777-rchb4"] Feb 18 12:08:01 crc kubenswrapper[4717]: I0218 12:08:01.615880 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 12:08:01 crc kubenswrapper[4717]: W0218 12:08:01.684548 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5971f764_400d_4e52_8d75_53bd7384648b.slice/crio-d1e0b2148f0d54de5e6fa366b5156f9ef7fdfa10f9f706479c5e78fb8eb714c4 WatchSource:0}: Error finding container d1e0b2148f0d54de5e6fa366b5156f9ef7fdfa10f9f706479c5e78fb8eb714c4: Status 404 returned error can't find the container with id d1e0b2148f0d54de5e6fa366b5156f9ef7fdfa10f9f706479c5e78fb8eb714c4 Feb 18 12:08:01 crc kubenswrapper[4717]: I0218 12:08:01.737446 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 12:08:01 crc kubenswrapper[4717]: I0218 12:08:01.750441 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-mmbsj"] Feb 18 12:08:01 crc kubenswrapper[4717]: I0218 12:08:01.848702 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:08:01 crc kubenswrapper[4717]: I0218 12:08:01.920115 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4dpb\" (UniqueName: \"kubernetes.io/projected/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-kube-api-access-w4dpb\") pod \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " Feb 18 12:08:01 crc kubenswrapper[4717]: I0218 12:08:01.920190 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-horizon-secret-key\") pod \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " Feb 18 12:08:01 crc kubenswrapper[4717]: I0218 12:08:01.920240 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-scripts\") pod \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " Feb 18 12:08:01 crc kubenswrapper[4717]: I0218 12:08:01.920353 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-logs\") pod \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " Feb 18 12:08:01 crc kubenswrapper[4717]: I0218 12:08:01.920418 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-config-data\") pod \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\" (UID: \"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8\") " Feb 18 12:08:01 crc kubenswrapper[4717]: I0218 12:08:01.923280 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-logs" (OuterVolumeSpecName: "logs") pod "d139d1d1-94c1-492e-9fe6-7d2a6e6497f8" (UID: "d139d1d1-94c1-492e-9fe6-7d2a6e6497f8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:08:01 crc kubenswrapper[4717]: I0218 12:08:01.932703 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d139d1d1-94c1-492e-9fe6-7d2a6e6497f8" (UID: "d139d1d1-94c1-492e-9fe6-7d2a6e6497f8"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:01 crc kubenswrapper[4717]: I0218 12:08:01.963357 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-kube-api-access-w4dpb" (OuterVolumeSpecName: "kube-api-access-w4dpb") pod "d139d1d1-94c1-492e-9fe6-7d2a6e6497f8" (UID: "d139d1d1-94c1-492e-9fe6-7d2a6e6497f8"). InnerVolumeSpecName "kube-api-access-w4dpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:01 crc kubenswrapper[4717]: I0218 12:08:01.970656 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a718c21d-5293-46fb-ba9b-777285ff3fb8","Type":"ContainerStarted","Data":"7a7e69162928910e4d276fe31e60c255960abe545389e538b0126404d6df4ec9"} Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.030601 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.030856 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4dpb\" (UniqueName: \"kubernetes.io/projected/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-kube-api-access-w4dpb\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.030895 4717 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.046993 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" podUID="0a03e161-b29e-4556-b3c2-890a4ecf2885" containerName="dnsmasq-dns" containerID="cri-o://d1f77125baef88471a14ee73a39e7c8a56aa97378b819447b9b05533bb1282af" gracePeriod=10 Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.046969 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" event={"ID":"0a03e161-b29e-4556-b3c2-890a4ecf2885","Type":"ContainerStarted","Data":"d1f77125baef88471a14ee73a39e7c8a56aa97378b819447b9b05533bb1282af"} Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.047547 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.051738 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" event={"ID":"40820fca-bd29-4dfa-bfb3-04a2209eee32","Type":"ContainerStarted","Data":"cd53a54b50f43063bedcdcad32742cf8e677e7a841580e6af8937f5326e79f25"} Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.064583 4717 generic.go:334] "Generic (PLEG): container finished" podID="ce4f631a-18d2-46ea-b1e1-17b26808f94d" containerID="353e55e6efec9856f25670a92eeb6aa76869217f096491cc5be61ef688fbd937" exitCode=2 Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.064752 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce4f631a-18d2-46ea-b1e1-17b26808f94d","Type":"ContainerDied","Data":"353e55e6efec9856f25670a92eeb6aa76869217f096491cc5be61ef688fbd937"} Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.072790 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bf89fd777-rchb4" event={"ID":"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c","Type":"ContainerStarted","Data":"24bab47beab1fe14b2e02166a87532b4d6e2e7cdff0ff5087173a9a3e1bc2a78"} Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.088771 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" podStartSLOduration=14.088720582 podStartE2EDuration="14.088720582s" podCreationTimestamp="2026-02-18 12:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:08:02.071539113 +0000 UTC m=+1116.473640429" watchObservedRunningTime="2026-02-18 12:08:02.088720582 +0000 UTC m=+1116.490821898" Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.089762 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8584d7b78b-d8rzh" 
event={"ID":"46f23333-73d2-4175-85f5-9ceb356a42ad","Type":"ContainerStarted","Data":"8a9d28e4ecbfa886e508d891e0c6307e24cc6e57ed1a981a3dfd0784dac07955"} Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.090304 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.135935 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b69c68f97-t72j4" event={"ID":"d139d1d1-94c1-492e-9fe6-7d2a6e6497f8","Type":"ContainerDied","Data":"a6fd1cc7cb12b10fdcf76f6aa8930acd66dc564b2a82d13a9224b21d795aee7e"} Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.136023 4717 scope.go:117] "RemoveContainer" containerID="901fc8a39c7615d205a0798f618d24c53a093eafe3f22eada226567a24679f85" Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.136246 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b69c68f97-t72j4" Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.155033 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8584d7b78b-d8rzh" podStartSLOduration=14.155001568 podStartE2EDuration="14.155001568s" podCreationTimestamp="2026-02-18 12:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:08:02.125524812 +0000 UTC m=+1116.527626148" watchObservedRunningTime="2026-02-18 12:08:02.155001568 +0000 UTC m=+1116.557102884" Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.230677 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5971f764-400d-4e52-8d75-53bd7384648b","Type":"ContainerStarted","Data":"d1e0b2148f0d54de5e6fa366b5156f9ef7fdfa10f9f706479c5e78fb8eb714c4"} Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.363863 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-config-data" (OuterVolumeSpecName: "config-data") pod "d139d1d1-94c1-492e-9fe6-7d2a6e6497f8" (UID: "d139d1d1-94c1-492e-9fe6-7d2a6e6497f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.369696 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-scripts" (OuterVolumeSpecName: "scripts") pod "d139d1d1-94c1-492e-9fe6-7d2a6e6497f8" (UID: "d139d1d1-94c1-492e-9fe6-7d2a6e6497f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.404773 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.404916 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.603222 4717 scope.go:117] "RemoveContainer" containerID="abd1180d7054dc275a08de443fe1297b7d43f2d6a025de0140724414ed5ff445" Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.632048 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b69c68f97-t72j4"] Feb 18 12:08:02 crc kubenswrapper[4717]: I0218 12:08:02.720252 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-b69c68f97-t72j4"] Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.084199 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d139d1d1-94c1-492e-9fe6-7d2a6e6497f8" path="/var/lib/kubelet/pods/d139d1d1-94c1-492e-9fe6-7d2a6e6497f8/volumes" Feb 18 
12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.182804 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.208812 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b5f4c76fb-t68w8" podUID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.208902 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.210413 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce"} pod="openstack/horizon-6b5f4c76fb-t68w8" containerMessage="Container horizon failed startup probe, will be restarted" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.210484 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b5f4c76fb-t68w8" podUID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerName="horizon" containerID="cri-o://0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce" gracePeriod=30 Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.275558 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.294962 4717 generic.go:334] "Generic (PLEG): container finished" podID="0a03e161-b29e-4556-b3c2-890a4ecf2885" containerID="d1f77125baef88471a14ee73a39e7c8a56aa97378b819447b9b05533bb1282af" exitCode=0 Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.295117 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" event={"ID":"0a03e161-b29e-4556-b3c2-890a4ecf2885","Type":"ContainerDied","Data":"d1f77125baef88471a14ee73a39e7c8a56aa97378b819447b9b05533bb1282af"} Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.295173 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" event={"ID":"0a03e161-b29e-4556-b3c2-890a4ecf2885","Type":"ContainerDied","Data":"f7f51fce7c3629c411a779a3cdef10ae8d3757378427f8be4c630c8477e7977f"} Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.295206 4717 scope.go:117] "RemoveContainer" containerID="d1f77125baef88471a14ee73a39e7c8a56aa97378b819447b9b05533bb1282af" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.295478 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ppd2g" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.315995 4717 generic.go:334] "Generic (PLEG): container finished" podID="40820fca-bd29-4dfa-bfb3-04a2209eee32" containerID="de6fdc4ab3e16f415093c32892d08f9c29a89ce45dfa06b8951e342d90b64e3e" exitCode=0 Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.316116 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" event={"ID":"40820fca-bd29-4dfa-bfb3-04a2209eee32","Type":"ContainerDied","Data":"de6fdc4ab3e16f415093c32892d08f9c29a89ce45dfa06b8951e342d90b64e3e"} Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.367840 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-config-data\") pod \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.367934 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-config\") pod \"0a03e161-b29e-4556-b3c2-890a4ecf2885\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.367968 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-scripts\") pod \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.368032 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-ovsdbserver-sb\") pod \"0a03e161-b29e-4556-b3c2-890a4ecf2885\" (UID: 
\"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.368043 4717 generic.go:334] "Generic (PLEG): container finished" podID="ce4f631a-18d2-46ea-b1e1-17b26808f94d" containerID="ef6a127c77e4b240564a23fbe9df66381c84028e8bb763d20f8050cd35a5847a" exitCode=0 Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.368095 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-dns-swift-storage-0\") pod \"0a03e161-b29e-4556-b3c2-890a4ecf2885\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.368149 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce4f631a-18d2-46ea-b1e1-17b26808f94d","Type":"ContainerDied","Data":"ef6a127c77e4b240564a23fbe9df66381c84028e8bb763d20f8050cd35a5847a"} Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.368184 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce4f631a-18d2-46ea-b1e1-17b26808f94d","Type":"ContainerDied","Data":"e04b7e9029bb59345f9cccdbe9516bd20aa0909d1aaa533f14d7fe3f5e899a4f"} Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.368286 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.368294 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-sg-core-conf-yaml\") pod \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.368331 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x7f2\" (UniqueName: \"kubernetes.io/projected/ce4f631a-18d2-46ea-b1e1-17b26808f94d-kube-api-access-8x7f2\") pod \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.368372 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-combined-ca-bundle\") pod \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.368804 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-18d2-46ea-b1e1-17b26808f94d-log-httpd\") pod \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.368837 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-dns-svc\") pod \"0a03e161-b29e-4556-b3c2-890a4ecf2885\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.368956 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-ovsdbserver-nb\") pod \"0a03e161-b29e-4556-b3c2-890a4ecf2885\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.368987 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvsls\" (UniqueName: \"kubernetes.io/projected/0a03e161-b29e-4556-b3c2-890a4ecf2885-kube-api-access-kvsls\") pod \"0a03e161-b29e-4556-b3c2-890a4ecf2885\" (UID: \"0a03e161-b29e-4556-b3c2-890a4ecf2885\") " Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.369062 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-18d2-46ea-b1e1-17b26808f94d-run-httpd\") pod \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\" (UID: \"ce4f631a-18d2-46ea-b1e1-17b26808f94d\") " Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.374047 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce4f631a-18d2-46ea-b1e1-17b26808f94d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ce4f631a-18d2-46ea-b1e1-17b26808f94d" (UID: "ce4f631a-18d2-46ea-b1e1-17b26808f94d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.374467 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce4f631a-18d2-46ea-b1e1-17b26808f94d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ce4f631a-18d2-46ea-b1e1-17b26808f94d" (UID: "ce4f631a-18d2-46ea-b1e1-17b26808f94d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.402020 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-scripts" (OuterVolumeSpecName: "scripts") pod "ce4f631a-18d2-46ea-b1e1-17b26808f94d" (UID: "ce4f631a-18d2-46ea-b1e1-17b26808f94d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.406655 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bf89fd777-rchb4" event={"ID":"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c","Type":"ContainerStarted","Data":"ff6311a3b91a4923c404a2cd9fff199cdcafe81521b574445569fa344f88e4b1"} Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.413707 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a03e161-b29e-4556-b3c2-890a4ecf2885-kube-api-access-kvsls" (OuterVolumeSpecName: "kube-api-access-kvsls") pod "0a03e161-b29e-4556-b3c2-890a4ecf2885" (UID: "0a03e161-b29e-4556-b3c2-890a4ecf2885"). InnerVolumeSpecName "kube-api-access-kvsls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.413927 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce4f631a-18d2-46ea-b1e1-17b26808f94d-kube-api-access-8x7f2" (OuterVolumeSpecName: "kube-api-access-8x7f2") pod "ce4f631a-18d2-46ea-b1e1-17b26808f94d" (UID: "ce4f631a-18d2-46ea-b1e1-17b26808f94d"). InnerVolumeSpecName "kube-api-access-8x7f2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.456632 4717 scope.go:117] "RemoveContainer" containerID="4016b4b42fdaf62d554c295a3e263c33fa10ad02124e9b4caaa7f79cc62415b3" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.458693 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a718c21d-5293-46fb-ba9b-777285ff3fb8","Type":"ContainerStarted","Data":"643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020"} Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.472942 4717 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-18d2-46ea-b1e1-17b26808f94d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.472984 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.472997 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x7f2\" (UniqueName: \"kubernetes.io/projected/ce4f631a-18d2-46ea-b1e1-17b26808f94d-kube-api-access-8x7f2\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.473010 4717 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce4f631a-18d2-46ea-b1e1-17b26808f94d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.473024 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvsls\" (UniqueName: \"kubernetes.io/projected/0a03e161-b29e-4556-b3c2-890a4ecf2885-kube-api-access-kvsls\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.515286 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0a03e161-b29e-4556-b3c2-890a4ecf2885" (UID: "0a03e161-b29e-4556-b3c2-890a4ecf2885"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.516984 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce4f631a-18d2-46ea-b1e1-17b26808f94d" (UID: "ce4f631a-18d2-46ea-b1e1-17b26808f94d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.532116 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ce4f631a-18d2-46ea-b1e1-17b26808f94d" (UID: "ce4f631a-18d2-46ea-b1e1-17b26808f94d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.568440 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0a03e161-b29e-4556-b3c2-890a4ecf2885" (UID: "0a03e161-b29e-4556-b3c2-890a4ecf2885"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.572478 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0a03e161-b29e-4556-b3c2-890a4ecf2885" (UID: "0a03e161-b29e-4556-b3c2-890a4ecf2885"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.575183 4717 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.575585 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.575724 4717 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.575815 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.575902 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.601164 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-config" (OuterVolumeSpecName: "config") pod "0a03e161-b29e-4556-b3c2-890a4ecf2885" (UID: "0a03e161-b29e-4556-b3c2-890a4ecf2885"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.610668 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-config-data" (OuterVolumeSpecName: "config-data") pod "ce4f631a-18d2-46ea-b1e1-17b26808f94d" (UID: "ce4f631a-18d2-46ea-b1e1-17b26808f94d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.623111 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f66d9f78-rmqt2" podUID="08ddc7c0-43cb-4abd-bcab-b2998ffbae42" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:45774->10.217.0.160:9311: read: connection reset by peer" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.623730 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f66d9f78-rmqt2" podUID="08ddc7c0-43cb-4abd-bcab-b2998ffbae42" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:45776->10.217.0.160:9311: read: connection reset by peer" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.641212 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0a03e161-b29e-4556-b3c2-890a4ecf2885" (UID: "0a03e161-b29e-4556-b3c2-890a4ecf2885"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.679557 4717 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.679599 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce4f631a-18d2-46ea-b1e1-17b26808f94d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.679612 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a03e161-b29e-4556-b3c2-890a4ecf2885-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.909535 4717 scope.go:117] "RemoveContainer" containerID="d1f77125baef88471a14ee73a39e7c8a56aa97378b819447b9b05533bb1282af" Feb 18 12:08:03 crc kubenswrapper[4717]: E0218 12:08:03.914540 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1f77125baef88471a14ee73a39e7c8a56aa97378b819447b9b05533bb1282af\": container with ID starting with d1f77125baef88471a14ee73a39e7c8a56aa97378b819447b9b05533bb1282af not found: ID does not exist" containerID="d1f77125baef88471a14ee73a39e7c8a56aa97378b819447b9b05533bb1282af" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.914649 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1f77125baef88471a14ee73a39e7c8a56aa97378b819447b9b05533bb1282af"} err="failed to get container status \"d1f77125baef88471a14ee73a39e7c8a56aa97378b819447b9b05533bb1282af\": rpc error: code = NotFound desc = could not find container \"d1f77125baef88471a14ee73a39e7c8a56aa97378b819447b9b05533bb1282af\": container with ID starting with 
d1f77125baef88471a14ee73a39e7c8a56aa97378b819447b9b05533bb1282af not found: ID does not exist" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.914694 4717 scope.go:117] "RemoveContainer" containerID="4016b4b42fdaf62d554c295a3e263c33fa10ad02124e9b4caaa7f79cc62415b3" Feb 18 12:08:03 crc kubenswrapper[4717]: E0218 12:08:03.918307 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4016b4b42fdaf62d554c295a3e263c33fa10ad02124e9b4caaa7f79cc62415b3\": container with ID starting with 4016b4b42fdaf62d554c295a3e263c33fa10ad02124e9b4caaa7f79cc62415b3 not found: ID does not exist" containerID="4016b4b42fdaf62d554c295a3e263c33fa10ad02124e9b4caaa7f79cc62415b3" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.918377 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4016b4b42fdaf62d554c295a3e263c33fa10ad02124e9b4caaa7f79cc62415b3"} err="failed to get container status \"4016b4b42fdaf62d554c295a3e263c33fa10ad02124e9b4caaa7f79cc62415b3\": rpc error: code = NotFound desc = could not find container \"4016b4b42fdaf62d554c295a3e263c33fa10ad02124e9b4caaa7f79cc62415b3\": container with ID starting with 4016b4b42fdaf62d554c295a3e263c33fa10ad02124e9b4caaa7f79cc62415b3 not found: ID does not exist" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.918418 4717 scope.go:117] "RemoveContainer" containerID="353e55e6efec9856f25670a92eeb6aa76869217f096491cc5be61ef688fbd937" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.939002 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.950495 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.960764 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ppd2g"] Feb 18 12:08:03 crc 
kubenswrapper[4717]: I0218 12:08:03.972358 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:03 crc kubenswrapper[4717]: E0218 12:08:03.972913 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4f631a-18d2-46ea-b1e1-17b26808f94d" containerName="sg-core" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.972940 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4f631a-18d2-46ea-b1e1-17b26808f94d" containerName="sg-core" Feb 18 12:08:03 crc kubenswrapper[4717]: E0218 12:08:03.972951 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a03e161-b29e-4556-b3c2-890a4ecf2885" containerName="dnsmasq-dns" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.972958 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a03e161-b29e-4556-b3c2-890a4ecf2885" containerName="dnsmasq-dns" Feb 18 12:08:03 crc kubenswrapper[4717]: E0218 12:08:03.972987 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d139d1d1-94c1-492e-9fe6-7d2a6e6497f8" containerName="horizon-log" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.972994 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="d139d1d1-94c1-492e-9fe6-7d2a6e6497f8" containerName="horizon-log" Feb 18 12:08:03 crc kubenswrapper[4717]: E0218 12:08:03.973015 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4f631a-18d2-46ea-b1e1-17b26808f94d" containerName="ceilometer-notification-agent" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.973021 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4f631a-18d2-46ea-b1e1-17b26808f94d" containerName="ceilometer-notification-agent" Feb 18 12:08:03 crc kubenswrapper[4717]: E0218 12:08:03.973037 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a03e161-b29e-4556-b3c2-890a4ecf2885" containerName="init" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.973045 4717 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="0a03e161-b29e-4556-b3c2-890a4ecf2885" containerName="init" Feb 18 12:08:03 crc kubenswrapper[4717]: E0218 12:08:03.973063 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d139d1d1-94c1-492e-9fe6-7d2a6e6497f8" containerName="horizon" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.973071 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="d139d1d1-94c1-492e-9fe6-7d2a6e6497f8" containerName="horizon" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.973248 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a03e161-b29e-4556-b3c2-890a4ecf2885" containerName="dnsmasq-dns" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.973279 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce4f631a-18d2-46ea-b1e1-17b26808f94d" containerName="sg-core" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.973287 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="d139d1d1-94c1-492e-9fe6-7d2a6e6497f8" containerName="horizon-log" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.973302 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce4f631a-18d2-46ea-b1e1-17b26808f94d" containerName="ceilometer-notification-agent" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.973312 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="d139d1d1-94c1-492e-9fe6-7d2a6e6497f8" containerName="horizon" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.979014 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.983794 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.984072 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.986934 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ppd2g"] Feb 18 12:08:03 crc kubenswrapper[4717]: I0218 12:08:03.994541 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.029668 4717 scope.go:117] "RemoveContainer" containerID="ef6a127c77e4b240564a23fbe9df66381c84028e8bb763d20f8050cd35a5847a" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.094850 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365febb7-ab5e-41fd-ad78-e6209e35653e-run-httpd\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.094919 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.094965 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-config-data\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 
12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.094993 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgt8s\" (UniqueName: \"kubernetes.io/projected/365febb7-ab5e-41fd-ad78-e6209e35653e-kube-api-access-rgt8s\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.095045 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-scripts\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.095298 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365febb7-ab5e-41fd-ad78-e6209e35653e-log-httpd\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.095346 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.202843 4717 scope.go:117] "RemoveContainer" containerID="353e55e6efec9856f25670a92eeb6aa76869217f096491cc5be61ef688fbd937" Feb 18 12:08:04 crc kubenswrapper[4717]: E0218 12:08:04.205023 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"353e55e6efec9856f25670a92eeb6aa76869217f096491cc5be61ef688fbd937\": container with ID starting with 
353e55e6efec9856f25670a92eeb6aa76869217f096491cc5be61ef688fbd937 not found: ID does not exist" containerID="353e55e6efec9856f25670a92eeb6aa76869217f096491cc5be61ef688fbd937" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.205058 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353e55e6efec9856f25670a92eeb6aa76869217f096491cc5be61ef688fbd937"} err="failed to get container status \"353e55e6efec9856f25670a92eeb6aa76869217f096491cc5be61ef688fbd937\": rpc error: code = NotFound desc = could not find container \"353e55e6efec9856f25670a92eeb6aa76869217f096491cc5be61ef688fbd937\": container with ID starting with 353e55e6efec9856f25670a92eeb6aa76869217f096491cc5be61ef688fbd937 not found: ID does not exist" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.205084 4717 scope.go:117] "RemoveContainer" containerID="ef6a127c77e4b240564a23fbe9df66381c84028e8bb763d20f8050cd35a5847a" Feb 18 12:08:04 crc kubenswrapper[4717]: E0218 12:08:04.208586 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef6a127c77e4b240564a23fbe9df66381c84028e8bb763d20f8050cd35a5847a\": container with ID starting with ef6a127c77e4b240564a23fbe9df66381c84028e8bb763d20f8050cd35a5847a not found: ID does not exist" containerID="ef6a127c77e4b240564a23fbe9df66381c84028e8bb763d20f8050cd35a5847a" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.208705 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef6a127c77e4b240564a23fbe9df66381c84028e8bb763d20f8050cd35a5847a"} err="failed to get container status \"ef6a127c77e4b240564a23fbe9df66381c84028e8bb763d20f8050cd35a5847a\": rpc error: code = NotFound desc = could not find container \"ef6a127c77e4b240564a23fbe9df66381c84028e8bb763d20f8050cd35a5847a\": container with ID starting with ef6a127c77e4b240564a23fbe9df66381c84028e8bb763d20f8050cd35a5847a not found: ID does not 
exist" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.258057 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-scripts\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.261725 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365febb7-ab5e-41fd-ad78-e6209e35653e-log-httpd\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.261861 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.262229 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365febb7-ab5e-41fd-ad78-e6209e35653e-run-httpd\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.262465 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.262599 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgt8s\" (UniqueName: 
\"kubernetes.io/projected/365febb7-ab5e-41fd-ad78-e6209e35653e-kube-api-access-rgt8s\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.262706 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-config-data\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.266785 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365febb7-ab5e-41fd-ad78-e6209e35653e-run-httpd\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.270868 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365febb7-ab5e-41fd-ad78-e6209e35653e-log-httpd\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.273861 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.275202 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-config-data\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.276280 4717 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.276465 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-scripts\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.292169 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgt8s\" (UniqueName: \"kubernetes.io/projected/365febb7-ab5e-41fd-ad78-e6209e35653e-kube-api-access-rgt8s\") pod \"ceilometer-0\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.316772 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.334536 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.468297 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-config-data\") pod \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.468382 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-combined-ca-bundle\") pod \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.468490 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnqvl\" (UniqueName: \"kubernetes.io/projected/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-kube-api-access-mnqvl\") pod \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.468560 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-config-data-custom\") pod \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.468614 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-logs\") pod \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\" (UID: \"08ddc7c0-43cb-4abd-bcab-b2998ffbae42\") " Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.469688 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-logs" (OuterVolumeSpecName: "logs") pod "08ddc7c0-43cb-4abd-bcab-b2998ffbae42" (UID: "08ddc7c0-43cb-4abd-bcab-b2998ffbae42"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.479239 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "08ddc7c0-43cb-4abd-bcab-b2998ffbae42" (UID: "08ddc7c0-43cb-4abd-bcab-b2998ffbae42"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.480519 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-kube-api-access-mnqvl" (OuterVolumeSpecName: "kube-api-access-mnqvl") pod "08ddc7c0-43cb-4abd-bcab-b2998ffbae42" (UID: "08ddc7c0-43cb-4abd-bcab-b2998ffbae42"). InnerVolumeSpecName "kube-api-access-mnqvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.496835 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" event={"ID":"40820fca-bd29-4dfa-bfb3-04a2209eee32","Type":"ContainerStarted","Data":"194a4b144ace41854919e7b492b867efb56c6e1652ddaf14eddd579e9b1c39ee"} Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.497302 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.504777 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bf89fd777-rchb4" event={"ID":"a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c","Type":"ContainerStarted","Data":"88683e641f5df344a84cd898bf41aeecab90caa0ac58ca772229e6e1dfdfb5d8"} Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.506319 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.509471 4717 generic.go:334] "Generic (PLEG): container finished" podID="08ddc7c0-43cb-4abd-bcab-b2998ffbae42" containerID="8d8075b9210ccaa3ba9dd18beecb7d9e972c26e46ec4108e11da061b5be0fcbd" exitCode=0 Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.509542 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f66d9f78-rmqt2" event={"ID":"08ddc7c0-43cb-4abd-bcab-b2998ffbae42","Type":"ContainerDied","Data":"8d8075b9210ccaa3ba9dd18beecb7d9e972c26e46ec4108e11da061b5be0fcbd"} Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.509572 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f66d9f78-rmqt2" event={"ID":"08ddc7c0-43cb-4abd-bcab-b2998ffbae42","Type":"ContainerDied","Data":"e168fda5423430464fe9e40d6d34021ab329759f5fe243c33abb30e91b07c1a8"} Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.509592 4717 scope.go:117] 
"RemoveContainer" containerID="8d8075b9210ccaa3ba9dd18beecb7d9e972c26e46ec4108e11da061b5be0fcbd" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.509706 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f66d9f78-rmqt2" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.528441 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" podStartSLOduration=6.528418682 podStartE2EDuration="6.528418682s" podCreationTimestamp="2026-02-18 12:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:08:04.519180403 +0000 UTC m=+1118.921281719" watchObservedRunningTime="2026-02-18 12:08:04.528418682 +0000 UTC m=+1118.930519998" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.540423 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08ddc7c0-43cb-4abd-bcab-b2998ffbae42" (UID: "08ddc7c0-43cb-4abd-bcab-b2998ffbae42"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.560204 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7bf89fd777-rchb4" podStartSLOduration=12.560170864 podStartE2EDuration="12.560170864s" podCreationTimestamp="2026-02-18 12:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:08:04.544723535 +0000 UTC m=+1118.946824851" watchObservedRunningTime="2026-02-18 12:08:04.560170864 +0000 UTC m=+1118.962272190" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.571199 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.571232 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.571243 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnqvl\" (UniqueName: \"kubernetes.io/projected/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-kube-api-access-mnqvl\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.571274 4717 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.629304 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-config-data" (OuterVolumeSpecName: "config-data") pod "08ddc7c0-43cb-4abd-bcab-b2998ffbae42" (UID: 
"08ddc7c0-43cb-4abd-bcab-b2998ffbae42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.673578 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08ddc7c0-43cb-4abd-bcab-b2998ffbae42-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.751575 4717 scope.go:117] "RemoveContainer" containerID="054e0eb3af972319c506fbf2a6945db5622def0807a258a741b64e0bbb7f9520" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.790493 4717 scope.go:117] "RemoveContainer" containerID="8d8075b9210ccaa3ba9dd18beecb7d9e972c26e46ec4108e11da061b5be0fcbd" Feb 18 12:08:04 crc kubenswrapper[4717]: E0218 12:08:04.791518 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d8075b9210ccaa3ba9dd18beecb7d9e972c26e46ec4108e11da061b5be0fcbd\": container with ID starting with 8d8075b9210ccaa3ba9dd18beecb7d9e972c26e46ec4108e11da061b5be0fcbd not found: ID does not exist" containerID="8d8075b9210ccaa3ba9dd18beecb7d9e972c26e46ec4108e11da061b5be0fcbd" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.791561 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d8075b9210ccaa3ba9dd18beecb7d9e972c26e46ec4108e11da061b5be0fcbd"} err="failed to get container status \"8d8075b9210ccaa3ba9dd18beecb7d9e972c26e46ec4108e11da061b5be0fcbd\": rpc error: code = NotFound desc = could not find container \"8d8075b9210ccaa3ba9dd18beecb7d9e972c26e46ec4108e11da061b5be0fcbd\": container with ID starting with 8d8075b9210ccaa3ba9dd18beecb7d9e972c26e46ec4108e11da061b5be0fcbd not found: ID does not exist" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.791595 4717 scope.go:117] "RemoveContainer" containerID="054e0eb3af972319c506fbf2a6945db5622def0807a258a741b64e0bbb7f9520" Feb 18 12:08:04 
crc kubenswrapper[4717]: E0218 12:08:04.792305 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"054e0eb3af972319c506fbf2a6945db5622def0807a258a741b64e0bbb7f9520\": container with ID starting with 054e0eb3af972319c506fbf2a6945db5622def0807a258a741b64e0bbb7f9520 not found: ID does not exist" containerID="054e0eb3af972319c506fbf2a6945db5622def0807a258a741b64e0bbb7f9520" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.792348 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"054e0eb3af972319c506fbf2a6945db5622def0807a258a741b64e0bbb7f9520"} err="failed to get container status \"054e0eb3af972319c506fbf2a6945db5622def0807a258a741b64e0bbb7f9520\": rpc error: code = NotFound desc = could not find container \"054e0eb3af972319c506fbf2a6945db5622def0807a258a741b64e0bbb7f9520\": container with ID starting with 054e0eb3af972319c506fbf2a6945db5622def0807a258a741b64e0bbb7f9520 not found: ID does not exist" Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.860802 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7f66d9f78-rmqt2"] Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.870808 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7f66d9f78-rmqt2"] Feb 18 12:08:04 crc kubenswrapper[4717]: I0218 12:08:04.914660 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:04 crc kubenswrapper[4717]: W0218 12:08:04.932423 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod365febb7_ab5e_41fd_ad78_e6209e35653e.slice/crio-6b95d8e042e19cef7c10a8c0120a6049a0647f4374d08de14357652f0ad6e30e WatchSource:0}: Error finding container 6b95d8e042e19cef7c10a8c0120a6049a0647f4374d08de14357652f0ad6e30e: Status 404 returned error can't find the container with id 
6b95d8e042e19cef7c10a8c0120a6049a0647f4374d08de14357652f0ad6e30e Feb 18 12:08:05 crc kubenswrapper[4717]: I0218 12:08:05.050415 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08ddc7c0-43cb-4abd-bcab-b2998ffbae42" path="/var/lib/kubelet/pods/08ddc7c0-43cb-4abd-bcab-b2998ffbae42/volumes" Feb 18 12:08:05 crc kubenswrapper[4717]: I0218 12:08:05.051346 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a03e161-b29e-4556-b3c2-890a4ecf2885" path="/var/lib/kubelet/pods/0a03e161-b29e-4556-b3c2-890a4ecf2885/volumes" Feb 18 12:08:05 crc kubenswrapper[4717]: I0218 12:08:05.052042 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce4f631a-18d2-46ea-b1e1-17b26808f94d" path="/var/lib/kubelet/pods/ce4f631a-18d2-46ea-b1e1-17b26808f94d/volumes" Feb 18 12:08:05 crc kubenswrapper[4717]: I0218 12:08:05.552009 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a718c21d-5293-46fb-ba9b-777285ff3fb8","Type":"ContainerStarted","Data":"d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2"} Feb 18 12:08:05 crc kubenswrapper[4717]: I0218 12:08:05.552429 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a718c21d-5293-46fb-ba9b-777285ff3fb8" containerName="cinder-api-log" containerID="cri-o://643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020" gracePeriod=30 Feb 18 12:08:05 crc kubenswrapper[4717]: I0218 12:08:05.552652 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a718c21d-5293-46fb-ba9b-777285ff3fb8" containerName="cinder-api" containerID="cri-o://d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2" gracePeriod=30 Feb 18 12:08:05 crc kubenswrapper[4717]: I0218 12:08:05.552805 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 12:08:05 crc 
kubenswrapper[4717]: I0218 12:08:05.562561 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365febb7-ab5e-41fd-ad78-e6209e35653e","Type":"ContainerStarted","Data":"6b95d8e042e19cef7c10a8c0120a6049a0647f4374d08de14357652f0ad6e30e"} Feb 18 12:08:05 crc kubenswrapper[4717]: I0218 12:08:05.569514 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5971f764-400d-4e52-8d75-53bd7384648b","Type":"ContainerStarted","Data":"81f6f3a3949dd3a636b5d82eba16e07fe25c1d664785e9a9777c16a8b7d33dba"} Feb 18 12:08:05 crc kubenswrapper[4717]: I0218 12:08:05.575519 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.575486221 podStartE2EDuration="7.575486221s" podCreationTimestamp="2026-02-18 12:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:08:05.575413558 +0000 UTC m=+1119.977514874" watchObservedRunningTime="2026-02-18 12:08:05.575486221 +0000 UTC m=+1119.977587537" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.202728 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.315985 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-scripts\") pod \"a718c21d-5293-46fb-ba9b-777285ff3fb8\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.316093 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f86rv\" (UniqueName: \"kubernetes.io/projected/a718c21d-5293-46fb-ba9b-777285ff3fb8-kube-api-access-f86rv\") pod \"a718c21d-5293-46fb-ba9b-777285ff3fb8\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.316121 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a718c21d-5293-46fb-ba9b-777285ff3fb8-logs\") pod \"a718c21d-5293-46fb-ba9b-777285ff3fb8\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.316147 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a718c21d-5293-46fb-ba9b-777285ff3fb8-etc-machine-id\") pod \"a718c21d-5293-46fb-ba9b-777285ff3fb8\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.316183 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-config-data-custom\") pod \"a718c21d-5293-46fb-ba9b-777285ff3fb8\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.316246 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-combined-ca-bundle\") pod \"a718c21d-5293-46fb-ba9b-777285ff3fb8\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.316352 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-config-data\") pod \"a718c21d-5293-46fb-ba9b-777285ff3fb8\" (UID: \"a718c21d-5293-46fb-ba9b-777285ff3fb8\") " Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.316903 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a718c21d-5293-46fb-ba9b-777285ff3fb8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a718c21d-5293-46fb-ba9b-777285ff3fb8" (UID: "a718c21d-5293-46fb-ba9b-777285ff3fb8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.317075 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a718c21d-5293-46fb-ba9b-777285ff3fb8-logs" (OuterVolumeSpecName: "logs") pod "a718c21d-5293-46fb-ba9b-777285ff3fb8" (UID: "a718c21d-5293-46fb-ba9b-777285ff3fb8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.318473 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a718c21d-5293-46fb-ba9b-777285ff3fb8-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.318519 4717 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a718c21d-5293-46fb-ba9b-777285ff3fb8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.324368 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-scripts" (OuterVolumeSpecName: "scripts") pod "a718c21d-5293-46fb-ba9b-777285ff3fb8" (UID: "a718c21d-5293-46fb-ba9b-777285ff3fb8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.324844 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a718c21d-5293-46fb-ba9b-777285ff3fb8" (UID: "a718c21d-5293-46fb-ba9b-777285ff3fb8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.325167 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a718c21d-5293-46fb-ba9b-777285ff3fb8-kube-api-access-f86rv" (OuterVolumeSpecName: "kube-api-access-f86rv") pod "a718c21d-5293-46fb-ba9b-777285ff3fb8" (UID: "a718c21d-5293-46fb-ba9b-777285ff3fb8"). InnerVolumeSpecName "kube-api-access-f86rv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.357540 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a718c21d-5293-46fb-ba9b-777285ff3fb8" (UID: "a718c21d-5293-46fb-ba9b-777285ff3fb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.374585 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-config-data" (OuterVolumeSpecName: "config-data") pod "a718c21d-5293-46fb-ba9b-777285ff3fb8" (UID: "a718c21d-5293-46fb-ba9b-777285ff3fb8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.420647 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.420736 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f86rv\" (UniqueName: \"kubernetes.io/projected/a718c21d-5293-46fb-ba9b-777285ff3fb8-kube-api-access-f86rv\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.420752 4717 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.420764 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.420780 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a718c21d-5293-46fb-ba9b-777285ff3fb8-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.605109 4717 generic.go:334] "Generic (PLEG): container finished" podID="a718c21d-5293-46fb-ba9b-777285ff3fb8" containerID="d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2" exitCode=0 Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.605154 4717 generic.go:334] "Generic (PLEG): container finished" podID="a718c21d-5293-46fb-ba9b-777285ff3fb8" containerID="643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020" exitCode=143 Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.605247 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a718c21d-5293-46fb-ba9b-777285ff3fb8","Type":"ContainerDied","Data":"d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2"} Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.605375 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a718c21d-5293-46fb-ba9b-777285ff3fb8","Type":"ContainerDied","Data":"643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020"} Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.605392 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a718c21d-5293-46fb-ba9b-777285ff3fb8","Type":"ContainerDied","Data":"7a7e69162928910e4d276fe31e60c255960abe545389e538b0126404d6df4ec9"} Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.605408 4717 scope.go:117] "RemoveContainer" containerID="d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.606241 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.628577 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365febb7-ab5e-41fd-ad78-e6209e35653e","Type":"ContainerStarted","Data":"cdd3fc40499b9d917654397de25d20ab1cd4ae675735a4b0b9468346d147399e"} Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.633856 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.642844 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5971f764-400d-4e52-8d75-53bd7384648b","Type":"ContainerStarted","Data":"4f64b942170ca3e6f2f0aabe73122a149d1f15118c9d36fb8b2139e41e0004f2"} Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.737121 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.913785098 podStartE2EDuration="8.73645453s" podCreationTimestamp="2026-02-18 12:07:58 +0000 UTC" firstStartedPulling="2026-02-18 12:08:01.708736174 +0000 UTC m=+1116.110837490" lastFinishedPulling="2026-02-18 12:08:03.531405606 +0000 UTC m=+1117.933506922" observedRunningTime="2026-02-18 12:08:06.722674749 +0000 UTC m=+1121.124776105" watchObservedRunningTime="2026-02-18 12:08:06.73645453 +0000 UTC m=+1121.138555846" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.758636 4717 scope.go:117] "RemoveContainer" containerID="643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.793552 4717 scope.go:117] "RemoveContainer" containerID="d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2" Feb 18 12:08:06 crc kubenswrapper[4717]: E0218 12:08:06.794175 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2\": container with ID starting with d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2 not found: ID does not exist" containerID="d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.794219 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2"} err="failed to get container status \"d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2\": rpc error: code = NotFound desc = could not find container \"d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2\": container with ID starting with d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2 not found: ID does not exist" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.794285 4717 scope.go:117] "RemoveContainer" containerID="643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020" Feb 18 12:08:06 crc kubenswrapper[4717]: E0218 12:08:06.795018 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020\": container with ID starting with 643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020 not found: ID does not exist" containerID="643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.795098 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020"} err="failed to get container status \"643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020\": rpc error: code = NotFound desc = could not find container \"643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020\": container with ID 
starting with 643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020 not found: ID does not exist" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.795149 4717 scope.go:117] "RemoveContainer" containerID="d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.797000 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2"} err="failed to get container status \"d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2\": rpc error: code = NotFound desc = could not find container \"d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2\": container with ID starting with d214b9a2b7abefd1917d1aa4f27de8f0e0303f5557da9c5383d4d2585abf68c2 not found: ID does not exist" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.797037 4717 scope.go:117] "RemoveContainer" containerID="643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.797121 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.797415 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020"} err="failed to get container status \"643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020\": rpc error: code = NotFound desc = could not find container \"643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020\": container with ID starting with 643b8254b6e1ff294d3a116f09bdf9330dc136af2e48bb475f27c453b683e020 not found: ID does not exist" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.810869 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 
12:08:06.826373 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 12:08:06 crc kubenswrapper[4717]: E0218 12:08:06.827068 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ddc7c0-43cb-4abd-bcab-b2998ffbae42" containerName="barbican-api" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.827102 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ddc7c0-43cb-4abd-bcab-b2998ffbae42" containerName="barbican-api" Feb 18 12:08:06 crc kubenswrapper[4717]: E0218 12:08:06.827129 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ddc7c0-43cb-4abd-bcab-b2998ffbae42" containerName="barbican-api-log" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.827141 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ddc7c0-43cb-4abd-bcab-b2998ffbae42" containerName="barbican-api-log" Feb 18 12:08:06 crc kubenswrapper[4717]: E0218 12:08:06.827167 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a718c21d-5293-46fb-ba9b-777285ff3fb8" containerName="cinder-api-log" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.827174 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="a718c21d-5293-46fb-ba9b-777285ff3fb8" containerName="cinder-api-log" Feb 18 12:08:06 crc kubenswrapper[4717]: E0218 12:08:06.827192 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a718c21d-5293-46fb-ba9b-777285ff3fb8" containerName="cinder-api" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.827199 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="a718c21d-5293-46fb-ba9b-777285ff3fb8" containerName="cinder-api" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.827493 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="a718c21d-5293-46fb-ba9b-777285ff3fb8" containerName="cinder-api-log" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.827517 4717 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a718c21d-5293-46fb-ba9b-777285ff3fb8" containerName="cinder-api" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.827543 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ddc7c0-43cb-4abd-bcab-b2998ffbae42" containerName="barbican-api" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.827555 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ddc7c0-43cb-4abd-bcab-b2998ffbae42" containerName="barbican-api-log" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.829147 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.837598 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.837866 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.837977 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.847491 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.939689 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.939751 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-config-data\") pod \"cinder-api-0\" (UID: 
\"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.939784 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-config-data-custom\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.939805 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/756321c7-681b-4183-b0ef-4afab35a28ae-logs\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.939875 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nslq\" (UniqueName: \"kubernetes.io/projected/756321c7-681b-4183-b0ef-4afab35a28ae-kube-api-access-6nslq\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.939933 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.939974 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-scripts\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 
12:08:06.939993 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-public-tls-certs\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:06 crc kubenswrapper[4717]: I0218 12:08:06.940052 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/756321c7-681b-4183-b0ef-4afab35a28ae-etc-machine-id\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.041856 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.041931 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-scripts\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.041952 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-public-tls-certs\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.042008 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/756321c7-681b-4183-b0ef-4afab35a28ae-etc-machine-id\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.042038 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.042066 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-config-data\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.042092 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-config-data-custom\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.042110 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/756321c7-681b-4183-b0ef-4afab35a28ae-logs\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.042171 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nslq\" (UniqueName: \"kubernetes.io/projected/756321c7-681b-4183-b0ef-4afab35a28ae-kube-api-access-6nslq\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc 
kubenswrapper[4717]: I0218 12:08:07.042780 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/756321c7-681b-4183-b0ef-4afab35a28ae-etc-machine-id\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.043397 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/756321c7-681b-4183-b0ef-4afab35a28ae-logs\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.050899 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-public-tls-certs\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.062953 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.065993 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-scripts\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.066638 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.069271 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a718c21d-5293-46fb-ba9b-777285ff3fb8" path="/var/lib/kubelet/pods/a718c21d-5293-46fb-ba9b-777285ff3fb8/volumes" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.070222 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-config-data-custom\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.070507 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/756321c7-681b-4183-b0ef-4afab35a28ae-config-data\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.075823 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nslq\" (UniqueName: \"kubernetes.io/projected/756321c7-681b-4183-b0ef-4afab35a28ae-kube-api-access-6nslq\") pod \"cinder-api-0\" (UID: \"756321c7-681b-4183-b0ef-4afab35a28ae\") " pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.213450 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.672311 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365febb7-ab5e-41fd-ad78-e6209e35653e","Type":"ContainerStarted","Data":"5eff0b07a7121912c23ed3875ae3f77cec26fc44811d7c528e40b0c1997af65d"} Feb 18 12:08:07 crc kubenswrapper[4717]: I0218 12:08:07.723680 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 12:08:08 crc kubenswrapper[4717]: I0218 12:08:08.700063 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"756321c7-681b-4183-b0ef-4afab35a28ae","Type":"ContainerStarted","Data":"8616c50afe0aabc43705b825b5e9bd580b3d6ab23f2ad3804d5c537eb9ab0acb"} Feb 18 12:08:08 crc kubenswrapper[4717]: I0218 12:08:08.700927 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"756321c7-681b-4183-b0ef-4afab35a28ae","Type":"ContainerStarted","Data":"0cd0fb5ef0f141bbcc7b2e38759bd2cee4a4e59a3758b227dd24fbe82f22c1a6"} Feb 18 12:08:08 crc kubenswrapper[4717]: I0218 12:08:08.706471 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365febb7-ab5e-41fd-ad78-e6209e35653e","Type":"ContainerStarted","Data":"820d6da3c878b9ba56840bb1eec2fc47ed8e071e39358826981f678e56200250"} Feb 18 12:08:08 crc kubenswrapper[4717]: I0218 12:08:08.727203 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-d949b4564-9ns6m" Feb 18 12:08:08 crc kubenswrapper[4717]: I0218 12:08:08.827115 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b5f4c76fb-t68w8"] Feb 18 12:08:09 crc kubenswrapper[4717]: I0218 12:08:09.141662 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 12:08:09 crc kubenswrapper[4717]: I0218 12:08:09.323557 4717 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:08:09 crc kubenswrapper[4717]: I0218 12:08:09.400875 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4sqcz"] Feb 18 12:08:09 crc kubenswrapper[4717]: I0218 12:08:09.401236 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" podUID="846dd5ee-5395-4934-8d9f-c39bbc189e49" containerName="dnsmasq-dns" containerID="cri-o://5a4f631ba1a0dd452140138004513cb5ff0aece5933b785161a45bce7c744729" gracePeriod=10 Feb 18 12:08:09 crc kubenswrapper[4717]: I0218 12:08:09.475714 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 12:08:09 crc kubenswrapper[4717]: I0218 12:08:09.738172 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"756321c7-681b-4183-b0ef-4afab35a28ae","Type":"ContainerStarted","Data":"0793bb6d7641533a008263ef55acb20fb733399f1cae946c3b251ba14d3973d5"} Feb 18 12:08:09 crc kubenswrapper[4717]: I0218 12:08:09.738864 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 12:08:09 crc kubenswrapper[4717]: I0218 12:08:09.749870 4717 generic.go:334] "Generic (PLEG): container finished" podID="846dd5ee-5395-4934-8d9f-c39bbc189e49" containerID="5a4f631ba1a0dd452140138004513cb5ff0aece5933b785161a45bce7c744729" exitCode=0 Feb 18 12:08:09 crc kubenswrapper[4717]: I0218 12:08:09.750962 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" event={"ID":"846dd5ee-5395-4934-8d9f-c39bbc189e49","Type":"ContainerDied","Data":"5a4f631ba1a0dd452140138004513cb5ff0aece5933b785161a45bce7c744729"} Feb 18 12:08:09 crc kubenswrapper[4717]: I0218 12:08:09.772596 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" 
podStartSLOduration=3.772562896 podStartE2EDuration="3.772562896s" podCreationTimestamp="2026-02-18 12:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:08:09.769939009 +0000 UTC m=+1124.172040345" watchObservedRunningTime="2026-02-18 12:08:09.772562896 +0000 UTC m=+1124.174664212" Feb 18 12:08:09 crc kubenswrapper[4717]: I0218 12:08:09.824381 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.100158 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.135808 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-ovsdbserver-nb\") pod \"846dd5ee-5395-4934-8d9f-c39bbc189e49\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.135881 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q9gp\" (UniqueName: \"kubernetes.io/projected/846dd5ee-5395-4934-8d9f-c39bbc189e49-kube-api-access-7q9gp\") pod \"846dd5ee-5395-4934-8d9f-c39bbc189e49\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.135929 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-ovsdbserver-sb\") pod \"846dd5ee-5395-4934-8d9f-c39bbc189e49\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.135962 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-dns-swift-storage-0\") pod \"846dd5ee-5395-4934-8d9f-c39bbc189e49\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.135994 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-dns-svc\") pod \"846dd5ee-5395-4934-8d9f-c39bbc189e49\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.136231 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-config\") pod \"846dd5ee-5395-4934-8d9f-c39bbc189e49\" (UID: \"846dd5ee-5395-4934-8d9f-c39bbc189e49\") " Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.145626 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/846dd5ee-5395-4934-8d9f-c39bbc189e49-kube-api-access-7q9gp" (OuterVolumeSpecName: "kube-api-access-7q9gp") pod "846dd5ee-5395-4934-8d9f-c39bbc189e49" (UID: "846dd5ee-5395-4934-8d9f-c39bbc189e49"). InnerVolumeSpecName "kube-api-access-7q9gp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.214302 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-config" (OuterVolumeSpecName: "config") pod "846dd5ee-5395-4934-8d9f-c39bbc189e49" (UID: "846dd5ee-5395-4934-8d9f-c39bbc189e49"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.239208 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.239269 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q9gp\" (UniqueName: \"kubernetes.io/projected/846dd5ee-5395-4934-8d9f-c39bbc189e49-kube-api-access-7q9gp\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.239926 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "846dd5ee-5395-4934-8d9f-c39bbc189e49" (UID: "846dd5ee-5395-4934-8d9f-c39bbc189e49"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.280868 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "846dd5ee-5395-4934-8d9f-c39bbc189e49" (UID: "846dd5ee-5395-4934-8d9f-c39bbc189e49"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.281594 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "846dd5ee-5395-4934-8d9f-c39bbc189e49" (UID: "846dd5ee-5395-4934-8d9f-c39bbc189e49"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.298879 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "846dd5ee-5395-4934-8d9f-c39bbc189e49" (UID: "846dd5ee-5395-4934-8d9f-c39bbc189e49"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.341825 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.341892 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.341908 4717 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.341926 4717 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/846dd5ee-5395-4934-8d9f-c39bbc189e49-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.762890 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365febb7-ab5e-41fd-ad78-e6209e35653e","Type":"ContainerStarted","Data":"8f2107431594a373c859cc33bc39621351c20b6ce045ef1c9f09d77a9e63fa8b"} Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.764303 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.768480 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" event={"ID":"846dd5ee-5395-4934-8d9f-c39bbc189e49","Type":"ContainerDied","Data":"13d186c81685bbd04e1a40d4491068caf25e9211172e6cb64fe7ef5284398aec"} Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.768558 4717 scope.go:117] "RemoveContainer" containerID="5a4f631ba1a0dd452140138004513cb5ff0aece5933b785161a45bce7c744729" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.768719 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-4sqcz" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.769015 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5971f764-400d-4e52-8d75-53bd7384648b" containerName="cinder-scheduler" containerID="cri-o://81f6f3a3949dd3a636b5d82eba16e07fe25c1d664785e9a9777c16a8b7d33dba" gracePeriod=30 Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.769055 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5971f764-400d-4e52-8d75-53bd7384648b" containerName="probe" containerID="cri-o://4f64b942170ca3e6f2f0aabe73122a149d1f15118c9d36fb8b2139e41e0004f2" gracePeriod=30 Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.801186 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.292088061 podStartE2EDuration="7.801153239s" podCreationTimestamp="2026-02-18 12:08:03 +0000 UTC" firstStartedPulling="2026-02-18 12:08:04.949039132 +0000 UTC m=+1119.351140458" lastFinishedPulling="2026-02-18 12:08:09.45810432 +0000 UTC m=+1123.860205636" observedRunningTime="2026-02-18 12:08:10.788584004 +0000 UTC m=+1125.190685330" watchObservedRunningTime="2026-02-18 12:08:10.801153239 +0000 UTC 
m=+1125.203254555" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.816566 4717 scope.go:117] "RemoveContainer" containerID="50f0439b488ea5973ca1ddc915bdf1647790ee6446f86acfd0062f62a15f4f27" Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.837142 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4sqcz"] Feb 18 12:08:10 crc kubenswrapper[4717]: I0218 12:08:10.848453 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-4sqcz"] Feb 18 12:08:11 crc kubenswrapper[4717]: I0218 12:08:11.106956 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="846dd5ee-5395-4934-8d9f-c39bbc189e49" path="/var/lib/kubelet/pods/846dd5ee-5395-4934-8d9f-c39bbc189e49/volumes" Feb 18 12:08:12 crc kubenswrapper[4717]: I0218 12:08:12.804346 4717 generic.go:334] "Generic (PLEG): container finished" podID="5971f764-400d-4e52-8d75-53bd7384648b" containerID="4f64b942170ca3e6f2f0aabe73122a149d1f15118c9d36fb8b2139e41e0004f2" exitCode=0 Feb 18 12:08:12 crc kubenswrapper[4717]: I0218 12:08:12.804436 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5971f764-400d-4e52-8d75-53bd7384648b","Type":"ContainerDied","Data":"4f64b942170ca3e6f2f0aabe73122a149d1f15118c9d36fb8b2139e41e0004f2"} Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.338305 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-766b8ffffc-2xlnb" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.389848 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.525241 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq8kh\" (UniqueName: \"kubernetes.io/projected/5971f764-400d-4e52-8d75-53bd7384648b-kube-api-access-lq8kh\") pod \"5971f764-400d-4e52-8d75-53bd7384648b\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.525362 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5971f764-400d-4e52-8d75-53bd7384648b-etc-machine-id\") pod \"5971f764-400d-4e52-8d75-53bd7384648b\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.525423 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-scripts\") pod \"5971f764-400d-4e52-8d75-53bd7384648b\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.525512 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-combined-ca-bundle\") pod \"5971f764-400d-4e52-8d75-53bd7384648b\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.525556 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-config-data-custom\") pod \"5971f764-400d-4e52-8d75-53bd7384648b\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.525581 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-config-data\") pod \"5971f764-400d-4e52-8d75-53bd7384648b\" (UID: \"5971f764-400d-4e52-8d75-53bd7384648b\") " Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.525578 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5971f764-400d-4e52-8d75-53bd7384648b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5971f764-400d-4e52-8d75-53bd7384648b" (UID: "5971f764-400d-4e52-8d75-53bd7384648b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.526296 4717 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5971f764-400d-4e52-8d75-53bd7384648b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.535092 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-scripts" (OuterVolumeSpecName: "scripts") pod "5971f764-400d-4e52-8d75-53bd7384648b" (UID: "5971f764-400d-4e52-8d75-53bd7384648b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.537179 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5971f764-400d-4e52-8d75-53bd7384648b-kube-api-access-lq8kh" (OuterVolumeSpecName: "kube-api-access-lq8kh") pod "5971f764-400d-4e52-8d75-53bd7384648b" (UID: "5971f764-400d-4e52-8d75-53bd7384648b"). InnerVolumeSpecName "kube-api-access-lq8kh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.543587 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5971f764-400d-4e52-8d75-53bd7384648b" (UID: "5971f764-400d-4e52-8d75-53bd7384648b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.620449 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5971f764-400d-4e52-8d75-53bd7384648b" (UID: "5971f764-400d-4e52-8d75-53bd7384648b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.629113 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq8kh\" (UniqueName: \"kubernetes.io/projected/5971f764-400d-4e52-8d75-53bd7384648b-kube-api-access-lq8kh\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.629158 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.629171 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.629182 4717 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-config-data-custom\") 
on node \"crc\" DevicePath \"\"" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.667456 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-config-data" (OuterVolumeSpecName: "config-data") pod "5971f764-400d-4e52-8d75-53bd7384648b" (UID: "5971f764-400d-4e52-8d75-53bd7384648b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.681021 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.681844 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.731450 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5971f764-400d-4e52-8d75-53bd7384648b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.837546 4717 generic.go:334] "Generic (PLEG): container finished" podID="5971f764-400d-4e52-8d75-53bd7384648b" containerID="81f6f3a3949dd3a636b5d82eba16e07fe25c1d664785e9a9777c16a8b7d33dba" exitCode=0 Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.838710 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.839594 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5971f764-400d-4e52-8d75-53bd7384648b","Type":"ContainerDied","Data":"81f6f3a3949dd3a636b5d82eba16e07fe25c1d664785e9a9777c16a8b7d33dba"} Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.839633 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5971f764-400d-4e52-8d75-53bd7384648b","Type":"ContainerDied","Data":"d1e0b2148f0d54de5e6fa366b5156f9ef7fdfa10f9f706479c5e78fb8eb714c4"} Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.839658 4717 scope.go:117] "RemoveContainer" containerID="4f64b942170ca3e6f2f0aabe73122a149d1f15118c9d36fb8b2139e41e0004f2" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.877602 4717 scope.go:117] "RemoveContainer" containerID="81f6f3a3949dd3a636b5d82eba16e07fe25c1d664785e9a9777c16a8b7d33dba" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.890360 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.910186 4717 scope.go:117] "RemoveContainer" containerID="4f64b942170ca3e6f2f0aabe73122a149d1f15118c9d36fb8b2139e41e0004f2" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.916399 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 12:08:13 crc kubenswrapper[4717]: E0218 12:08:13.920880 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f64b942170ca3e6f2f0aabe73122a149d1f15118c9d36fb8b2139e41e0004f2\": container with ID starting with 4f64b942170ca3e6f2f0aabe73122a149d1f15118c9d36fb8b2139e41e0004f2 not found: ID does not exist" containerID="4f64b942170ca3e6f2f0aabe73122a149d1f15118c9d36fb8b2139e41e0004f2" Feb 18 12:08:13 crc 
kubenswrapper[4717]: I0218 12:08:13.920952 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f64b942170ca3e6f2f0aabe73122a149d1f15118c9d36fb8b2139e41e0004f2"} err="failed to get container status \"4f64b942170ca3e6f2f0aabe73122a149d1f15118c9d36fb8b2139e41e0004f2\": rpc error: code = NotFound desc = could not find container \"4f64b942170ca3e6f2f0aabe73122a149d1f15118c9d36fb8b2139e41e0004f2\": container with ID starting with 4f64b942170ca3e6f2f0aabe73122a149d1f15118c9d36fb8b2139e41e0004f2 not found: ID does not exist" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.920993 4717 scope.go:117] "RemoveContainer" containerID="81f6f3a3949dd3a636b5d82eba16e07fe25c1d664785e9a9777c16a8b7d33dba" Feb 18 12:08:13 crc kubenswrapper[4717]: E0218 12:08:13.921723 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81f6f3a3949dd3a636b5d82eba16e07fe25c1d664785e9a9777c16a8b7d33dba\": container with ID starting with 81f6f3a3949dd3a636b5d82eba16e07fe25c1d664785e9a9777c16a8b7d33dba not found: ID does not exist" containerID="81f6f3a3949dd3a636b5d82eba16e07fe25c1d664785e9a9777c16a8b7d33dba" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.921754 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81f6f3a3949dd3a636b5d82eba16e07fe25c1d664785e9a9777c16a8b7d33dba"} err="failed to get container status \"81f6f3a3949dd3a636b5d82eba16e07fe25c1d664785e9a9777c16a8b7d33dba\": rpc error: code = NotFound desc = could not find container \"81f6f3a3949dd3a636b5d82eba16e07fe25c1d664785e9a9777c16a8b7d33dba\": container with ID starting with 81f6f3a3949dd3a636b5d82eba16e07fe25c1d664785e9a9777c16a8b7d33dba not found: ID does not exist" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.947671 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 12:08:13 crc 
kubenswrapper[4717]: E0218 12:08:13.948179 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5971f764-400d-4e52-8d75-53bd7384648b" containerName="probe" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.948193 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="5971f764-400d-4e52-8d75-53bd7384648b" containerName="probe" Feb 18 12:08:13 crc kubenswrapper[4717]: E0218 12:08:13.948230 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5971f764-400d-4e52-8d75-53bd7384648b" containerName="cinder-scheduler" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.948236 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="5971f764-400d-4e52-8d75-53bd7384648b" containerName="cinder-scheduler" Feb 18 12:08:13 crc kubenswrapper[4717]: E0218 12:08:13.948245 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846dd5ee-5395-4934-8d9f-c39bbc189e49" containerName="dnsmasq-dns" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.948251 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="846dd5ee-5395-4934-8d9f-c39bbc189e49" containerName="dnsmasq-dns" Feb 18 12:08:13 crc kubenswrapper[4717]: E0218 12:08:13.950120 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846dd5ee-5395-4934-8d9f-c39bbc189e49" containerName="init" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.950147 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="846dd5ee-5395-4934-8d9f-c39bbc189e49" containerName="init" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.950521 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="846dd5ee-5395-4934-8d9f-c39bbc189e49" containerName="dnsmasq-dns" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.950552 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="5971f764-400d-4e52-8d75-53bd7384648b" containerName="probe" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.950570 4717 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5971f764-400d-4e52-8d75-53bd7384648b" containerName="cinder-scheduler" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.951725 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.958429 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 12:08:13 crc kubenswrapper[4717]: I0218 12:08:13.963505 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.040147 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9t5g\" (UniqueName: \"kubernetes.io/projected/96f31374-cbb7-49a4-878b-5667b60b960e-kube-api-access-l9t5g\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.040294 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96f31374-cbb7-49a4-878b-5667b60b960e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.040331 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f31374-cbb7-49a4-878b-5667b60b960e-config-data\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.040387 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/96f31374-cbb7-49a4-878b-5667b60b960e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.040423 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96f31374-cbb7-49a4-878b-5667b60b960e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.040449 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96f31374-cbb7-49a4-878b-5667b60b960e-scripts\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.078897 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-765f565894-lj9d4"] Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.086391 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.095366 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-765f565894-lj9d4"] Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.149560 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggfd7\" (UniqueName: \"kubernetes.io/projected/94b4824f-c89e-4740-ab96-6a36d2f7abb7-kube-api-access-ggfd7\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.149653 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b4824f-c89e-4740-ab96-6a36d2f7abb7-combined-ca-bundle\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.149715 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f31374-cbb7-49a4-878b-5667b60b960e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.151862 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96f31374-cbb7-49a4-878b-5667b60b960e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.152053 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/94b4824f-c89e-4740-ab96-6a36d2f7abb7-logs\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.152105 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96f31374-cbb7-49a4-878b-5667b60b960e-scripts\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.152221 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94b4824f-c89e-4740-ab96-6a36d2f7abb7-internal-tls-certs\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.152431 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9t5g\" (UniqueName: \"kubernetes.io/projected/96f31374-cbb7-49a4-878b-5667b60b960e-kube-api-access-l9t5g\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.153518 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/96f31374-cbb7-49a4-878b-5667b60b960e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.154393 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b4824f-c89e-4740-ab96-6a36d2f7abb7-config-data\") pod 
\"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.154495 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96f31374-cbb7-49a4-878b-5667b60b960e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.154556 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94b4824f-c89e-4740-ab96-6a36d2f7abb7-public-tls-certs\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.154764 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f31374-cbb7-49a4-878b-5667b60b960e-config-data\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.154935 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94b4824f-c89e-4740-ab96-6a36d2f7abb7-scripts\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.164172 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96f31374-cbb7-49a4-878b-5667b60b960e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " 
pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.164956 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f31374-cbb7-49a4-878b-5667b60b960e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.171857 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f31374-cbb7-49a4-878b-5667b60b960e-config-data\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.174123 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96f31374-cbb7-49a4-878b-5667b60b960e-scripts\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.179149 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9t5g\" (UniqueName: \"kubernetes.io/projected/96f31374-cbb7-49a4-878b-5667b60b960e-kube-api-access-l9t5g\") pod \"cinder-scheduler-0\" (UID: \"96f31374-cbb7-49a4-878b-5667b60b960e\") " pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.262199 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b4824f-c89e-4740-ab96-6a36d2f7abb7-config-data\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.262304 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94b4824f-c89e-4740-ab96-6a36d2f7abb7-public-tls-certs\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.262358 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94b4824f-c89e-4740-ab96-6a36d2f7abb7-scripts\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.262402 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggfd7\" (UniqueName: \"kubernetes.io/projected/94b4824f-c89e-4740-ab96-6a36d2f7abb7-kube-api-access-ggfd7\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.262433 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b4824f-c89e-4740-ab96-6a36d2f7abb7-combined-ca-bundle\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.262493 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94b4824f-c89e-4740-ab96-6a36d2f7abb7-logs\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.262523 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/94b4824f-c89e-4740-ab96-6a36d2f7abb7-internal-tls-certs\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.265358 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94b4824f-c89e-4740-ab96-6a36d2f7abb7-logs\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.270170 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94b4824f-c89e-4740-ab96-6a36d2f7abb7-internal-tls-certs\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.270878 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94b4824f-c89e-4740-ab96-6a36d2f7abb7-scripts\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.270898 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b4824f-c89e-4740-ab96-6a36d2f7abb7-combined-ca-bundle\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.271087 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b4824f-c89e-4740-ab96-6a36d2f7abb7-config-data\") pod \"placement-765f565894-lj9d4\" (UID: 
\"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.274683 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94b4824f-c89e-4740-ab96-6a36d2f7abb7-public-tls-certs\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.295724 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.297045 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggfd7\" (UniqueName: \"kubernetes.io/projected/94b4824f-c89e-4740-ab96-6a36d2f7abb7-kube-api-access-ggfd7\") pod \"placement-765f565894-lj9d4\" (UID: \"94b4824f-c89e-4740-ab96-6a36d2f7abb7\") " pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.415497 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:14 crc kubenswrapper[4717]: W0218 12:08:14.890651 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96f31374_cbb7_49a4_878b_5667b60b960e.slice/crio-b05bd58ab894d8696b3f8f99cdcd5287633a31649c097a35295c7ee02fdc2e80 WatchSource:0}: Error finding container b05bd58ab894d8696b3f8f99cdcd5287633a31649c097a35295c7ee02fdc2e80: Status 404 returned error can't find the container with id b05bd58ab894d8696b3f8f99cdcd5287633a31649c097a35295c7ee02fdc2e80 Feb 18 12:08:14 crc kubenswrapper[4717]: I0218 12:08:14.905039 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 12:08:15 crc kubenswrapper[4717]: I0218 12:08:15.090905 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5971f764-400d-4e52-8d75-53bd7384648b" path="/var/lib/kubelet/pods/5971f764-400d-4e52-8d75-53bd7384648b/volumes" Feb 18 12:08:15 crc kubenswrapper[4717]: I0218 12:08:15.092771 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-765f565894-lj9d4"] Feb 18 12:08:15 crc kubenswrapper[4717]: I0218 12:08:15.870187 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96f31374-cbb7-49a4-878b-5667b60b960e","Type":"ContainerStarted","Data":"e8a0e2f5caad2254edfabdeb9d651e39db914e47ffe45754804bcedce91172f2"} Feb 18 12:08:15 crc kubenswrapper[4717]: I0218 12:08:15.870619 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"96f31374-cbb7-49a4-878b-5667b60b960e","Type":"ContainerStarted","Data":"b05bd58ab894d8696b3f8f99cdcd5287633a31649c097a35295c7ee02fdc2e80"} Feb 18 12:08:15 crc kubenswrapper[4717]: I0218 12:08:15.872694 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-765f565894-lj9d4" 
event={"ID":"94b4824f-c89e-4740-ab96-6a36d2f7abb7","Type":"ContainerStarted","Data":"d74b5f4dba0b5244b895a3f9172057523fa4030c400819a54a81546beffd5fcc"} Feb 18 12:08:15 crc kubenswrapper[4717]: I0218 12:08:15.872725 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-765f565894-lj9d4" event={"ID":"94b4824f-c89e-4740-ab96-6a36d2f7abb7","Type":"ContainerStarted","Data":"522a00da07642c476cf5e1454b32ca8c144c041f1f95502e66608d5a7514c15a"} Feb 18 12:08:15 crc kubenswrapper[4717]: I0218 12:08:15.872740 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-765f565894-lj9d4" event={"ID":"94b4824f-c89e-4740-ab96-6a36d2f7abb7","Type":"ContainerStarted","Data":"8643d4f93a23fd954fedb105315332d035bd5745aa25ef92dcbc3020417869fd"} Feb 18 12:08:15 crc kubenswrapper[4717]: I0218 12:08:15.873079 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:15 crc kubenswrapper[4717]: I0218 12:08:15.873223 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:15 crc kubenswrapper[4717]: I0218 12:08:15.913213 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-765f565894-lj9d4" podStartSLOduration=1.913183595 podStartE2EDuration="1.913183595s" podCreationTimestamp="2026-02-18 12:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:08:15.909143018 +0000 UTC m=+1130.311244334" watchObservedRunningTime="2026-02-18 12:08:15.913183595 +0000 UTC m=+1130.315284911" Feb 18 12:08:16 crc kubenswrapper[4717]: I0218 12:08:16.894057 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"96f31374-cbb7-49a4-878b-5667b60b960e","Type":"ContainerStarted","Data":"c55b00e6d73c26b93c033ea641a4fb091898c32c6e9a5ca971ad7cc5960bee62"} Feb 18 12:08:16 crc kubenswrapper[4717]: I0218 12:08:16.926438 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.926414181 podStartE2EDuration="3.926414181s" podCreationTimestamp="2026-02-18 12:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:08:16.918971945 +0000 UTC m=+1131.321073271" watchObservedRunningTime="2026-02-18 12:08:16.926414181 +0000 UTC m=+1131.328515497" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.704252 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.706867 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.711218 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-c6tvm" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.711489 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.716421 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.723872 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.764562 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b250cd-4639-44fa-bc2f-d49dfa3058e4-combined-ca-bundle\") pod 
\"openstackclient\" (UID: \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\") " pod="openstack/openstackclient" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.764761 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12b250cd-4639-44fa-bc2f-d49dfa3058e4-openstack-config-secret\") pod \"openstackclient\" (UID: \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\") " pod="openstack/openstackclient" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.764849 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vz2v\" (UniqueName: \"kubernetes.io/projected/12b250cd-4639-44fa-bc2f-d49dfa3058e4-kube-api-access-9vz2v\") pod \"openstackclient\" (UID: \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\") " pod="openstack/openstackclient" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.764876 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12b250cd-4639-44fa-bc2f-d49dfa3058e4-openstack-config\") pod \"openstackclient\" (UID: \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\") " pod="openstack/openstackclient" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.866901 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12b250cd-4639-44fa-bc2f-d49dfa3058e4-openstack-config-secret\") pod \"openstackclient\" (UID: \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\") " pod="openstack/openstackclient" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.867035 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vz2v\" (UniqueName: \"kubernetes.io/projected/12b250cd-4639-44fa-bc2f-d49dfa3058e4-kube-api-access-9vz2v\") pod \"openstackclient\" (UID: 
\"12b250cd-4639-44fa-bc2f-d49dfa3058e4\") " pod="openstack/openstackclient" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.867075 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12b250cd-4639-44fa-bc2f-d49dfa3058e4-openstack-config\") pod \"openstackclient\" (UID: \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\") " pod="openstack/openstackclient" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.867119 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b250cd-4639-44fa-bc2f-d49dfa3058e4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\") " pod="openstack/openstackclient" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.868225 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12b250cd-4639-44fa-bc2f-d49dfa3058e4-openstack-config\") pod \"openstackclient\" (UID: \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\") " pod="openstack/openstackclient" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.879602 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b250cd-4639-44fa-bc2f-d49dfa3058e4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\") " pod="openstack/openstackclient" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.881691 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12b250cd-4639-44fa-bc2f-d49dfa3058e4-openstack-config-secret\") pod \"openstackclient\" (UID: \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\") " pod="openstack/openstackclient" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.886621 4717 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9vz2v\" (UniqueName: \"kubernetes.io/projected/12b250cd-4639-44fa-bc2f-d49dfa3058e4-kube-api-access-9vz2v\") pod \"openstackclient\" (UID: \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\") " pod="openstack/openstackclient" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.971468 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.972217 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 12:08:17 crc kubenswrapper[4717]: I0218 12:08:17.984624 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.039527 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.041297 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.060475 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.175975 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9b4ef341-6659-4283-81b4-78674dfd9fc8-openstack-config-secret\") pod \"openstackclient\" (UID: \"9b4ef341-6659-4283-81b4-78674dfd9fc8\") " pod="openstack/openstackclient" Feb 18 12:08:18 crc kubenswrapper[4717]: E0218 12:08:18.176114 4717 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 18 12:08:18 crc kubenswrapper[4717]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_12b250cd-4639-44fa-bc2f-d49dfa3058e4_0(558cba1e834a850b74b072c8fe78cfcde07ef6c0ddee3fe9409ca85c9ab98e58): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"558cba1e834a850b74b072c8fe78cfcde07ef6c0ddee3fe9409ca85c9ab98e58" Netns:"/var/run/netns/6c479d8e-9c8f-4284-b50b-09dc4c136367" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=558cba1e834a850b74b072c8fe78cfcde07ef6c0ddee3fe9409ca85c9ab98e58;K8S_POD_UID=12b250cd-4639-44fa-bc2f-d49dfa3058e4" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/12b250cd-4639-44fa-bc2f-d49dfa3058e4]: expected pod UID "12b250cd-4639-44fa-bc2f-d49dfa3058e4" but got "9b4ef341-6659-4283-81b4-78674dfd9fc8" from Kube API Feb 18 12:08:18 crc kubenswrapper[4717]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 18 12:08:18 crc kubenswrapper[4717]: > Feb 18 12:08:18 crc kubenswrapper[4717]: E0218 12:08:18.176177 4717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 18 12:08:18 crc kubenswrapper[4717]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_12b250cd-4639-44fa-bc2f-d49dfa3058e4_0(558cba1e834a850b74b072c8fe78cfcde07ef6c0ddee3fe9409ca85c9ab98e58): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"558cba1e834a850b74b072c8fe78cfcde07ef6c0ddee3fe9409ca85c9ab98e58" Netns:"/var/run/netns/6c479d8e-9c8f-4284-b50b-09dc4c136367" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=558cba1e834a850b74b072c8fe78cfcde07ef6c0ddee3fe9409ca85c9ab98e58;K8S_POD_UID=12b250cd-4639-44fa-bc2f-d49dfa3058e4" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/12b250cd-4639-44fa-bc2f-d49dfa3058e4]: expected pod UID "12b250cd-4639-44fa-bc2f-d49dfa3058e4" but got "9b4ef341-6659-4283-81b4-78674dfd9fc8" from Kube API Feb 18 12:08:18 crc kubenswrapper[4717]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 18 12:08:18 crc kubenswrapper[4717]: > pod="openstack/openstackclient" Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.176531 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9b4ef341-6659-4283-81b4-78674dfd9fc8-openstack-config\") pod \"openstackclient\" (UID: \"9b4ef341-6659-4283-81b4-78674dfd9fc8\") " pod="openstack/openstackclient" Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.176713 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4ef341-6659-4283-81b4-78674dfd9fc8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9b4ef341-6659-4283-81b4-78674dfd9fc8\") " pod="openstack/openstackclient" Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.176785 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvhjt\" (UniqueName: \"kubernetes.io/projected/9b4ef341-6659-4283-81b4-78674dfd9fc8-kube-api-access-bvhjt\") pod \"openstackclient\" (UID: \"9b4ef341-6659-4283-81b4-78674dfd9fc8\") " pod="openstack/openstackclient" Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.278729 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9b4ef341-6659-4283-81b4-78674dfd9fc8-openstack-config-secret\") pod \"openstackclient\" (UID: \"9b4ef341-6659-4283-81b4-78674dfd9fc8\") " pod="openstack/openstackclient" Feb 18 
12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.279007 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9b4ef341-6659-4283-81b4-78674dfd9fc8-openstack-config\") pod \"openstackclient\" (UID: \"9b4ef341-6659-4283-81b4-78674dfd9fc8\") " pod="openstack/openstackclient" Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.279111 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4ef341-6659-4283-81b4-78674dfd9fc8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9b4ef341-6659-4283-81b4-78674dfd9fc8\") " pod="openstack/openstackclient" Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.279173 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvhjt\" (UniqueName: \"kubernetes.io/projected/9b4ef341-6659-4283-81b4-78674dfd9fc8-kube-api-access-bvhjt\") pod \"openstackclient\" (UID: \"9b4ef341-6659-4283-81b4-78674dfd9fc8\") " pod="openstack/openstackclient" Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.280339 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9b4ef341-6659-4283-81b4-78674dfd9fc8-openstack-config\") pod \"openstackclient\" (UID: \"9b4ef341-6659-4283-81b4-78674dfd9fc8\") " pod="openstack/openstackclient" Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.285146 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4ef341-6659-4283-81b4-78674dfd9fc8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9b4ef341-6659-4283-81b4-78674dfd9fc8\") " pod="openstack/openstackclient" Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.285377 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" 
(UniqueName: \"kubernetes.io/secret/9b4ef341-6659-4283-81b4-78674dfd9fc8-openstack-config-secret\") pod \"openstackclient\" (UID: \"9b4ef341-6659-4283-81b4-78674dfd9fc8\") " pod="openstack/openstackclient" Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.300870 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvhjt\" (UniqueName: \"kubernetes.io/projected/9b4ef341-6659-4283-81b4-78674dfd9fc8-kube-api-access-bvhjt\") pod \"openstackclient\" (UID: \"9b4ef341-6659-4283-81b4-78674dfd9fc8\") " pod="openstack/openstackclient" Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.428750 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.912778 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.918575 4717 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="12b250cd-4639-44fa-bc2f-d49dfa3058e4" podUID="9b4ef341-6659-4283-81b4-78674dfd9fc8" Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.961619 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 12:08:18 crc kubenswrapper[4717]: I0218 12:08:18.992181 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 12:08:19 crc kubenswrapper[4717]: W0218 12:08:19.008036 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b4ef341_6659_4283_81b4_78674dfd9fc8.slice/crio-63a566cd5baa8f9e68a637e8723d4ad037b3e769c1a6dba9b1e2de88154d58bb WatchSource:0}: Error finding container 63a566cd5baa8f9e68a637e8723d4ad037b3e769c1a6dba9b1e2de88154d58bb: Status 404 returned error can't find the container with id 63a566cd5baa8f9e68a637e8723d4ad037b3e769c1a6dba9b1e2de88154d58bb Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.071944 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-586778dd75-mtms6"] Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.079233 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.093529 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-586778dd75-mtms6"] Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.095039 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.095346 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.095461 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.126300 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b250cd-4639-44fa-bc2f-d49dfa3058e4-combined-ca-bundle\") pod \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\" (UID: \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\") " Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.126465 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vz2v\" (UniqueName: \"kubernetes.io/projected/12b250cd-4639-44fa-bc2f-d49dfa3058e4-kube-api-access-9vz2v\") pod \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\" (UID: \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\") " Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.126674 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12b250cd-4639-44fa-bc2f-d49dfa3058e4-openstack-config\") pod \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\" (UID: \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\") " Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.126842 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" 
(UniqueName: \"kubernetes.io/secret/12b250cd-4639-44fa-bc2f-d49dfa3058e4-openstack-config-secret\") pod \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\" (UID: \"12b250cd-4639-44fa-bc2f-d49dfa3058e4\") " Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.138155 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12b250cd-4639-44fa-bc2f-d49dfa3058e4-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "12b250cd-4639-44fa-bc2f-d49dfa3058e4" (UID: "12b250cd-4639-44fa-bc2f-d49dfa3058e4"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.145882 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b250cd-4639-44fa-bc2f-d49dfa3058e4-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "12b250cd-4639-44fa-bc2f-d49dfa3058e4" (UID: "12b250cd-4639-44fa-bc2f-d49dfa3058e4"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.148318 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b250cd-4639-44fa-bc2f-d49dfa3058e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12b250cd-4639-44fa-bc2f-d49dfa3058e4" (UID: "12b250cd-4639-44fa-bc2f-d49dfa3058e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.149048 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b250cd-4639-44fa-bc2f-d49dfa3058e4-kube-api-access-9vz2v" (OuterVolumeSpecName: "kube-api-access-9vz2v") pod "12b250cd-4639-44fa-bc2f-d49dfa3058e4" (UID: "12b250cd-4639-44fa-bc2f-d49dfa3058e4"). InnerVolumeSpecName "kube-api-access-9vz2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.230814 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjwtf\" (UniqueName: \"kubernetes.io/projected/e21881f2-73fb-4d0f-974c-a74694a2b301-kube-api-access-sjwtf\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.230940 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e21881f2-73fb-4d0f-974c-a74694a2b301-config-data\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.230979 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e21881f2-73fb-4d0f-974c-a74694a2b301-internal-tls-certs\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.231018 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e21881f2-73fb-4d0f-974c-a74694a2b301-log-httpd\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.231043 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e21881f2-73fb-4d0f-974c-a74694a2b301-public-tls-certs\") pod 
\"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.231088 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e21881f2-73fb-4d0f-974c-a74694a2b301-combined-ca-bundle\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.231120 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e21881f2-73fb-4d0f-974c-a74694a2b301-run-httpd\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.231212 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e21881f2-73fb-4d0f-974c-a74694a2b301-etc-swift\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.231354 4717 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12b250cd-4639-44fa-bc2f-d49dfa3058e4-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.231366 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b250cd-4639-44fa-bc2f-d49dfa3058e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.231377 4717 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-9vz2v\" (UniqueName: \"kubernetes.io/projected/12b250cd-4639-44fa-bc2f-d49dfa3058e4-kube-api-access-9vz2v\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.231387 4717 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12b250cd-4639-44fa-bc2f-d49dfa3058e4-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.296464 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.333769 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjwtf\" (UniqueName: \"kubernetes.io/projected/e21881f2-73fb-4d0f-974c-a74694a2b301-kube-api-access-sjwtf\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.333869 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e21881f2-73fb-4d0f-974c-a74694a2b301-config-data\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.333934 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e21881f2-73fb-4d0f-974c-a74694a2b301-internal-tls-certs\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.333970 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e21881f2-73fb-4d0f-974c-a74694a2b301-log-httpd\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.333990 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e21881f2-73fb-4d0f-974c-a74694a2b301-public-tls-certs\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.334035 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e21881f2-73fb-4d0f-974c-a74694a2b301-combined-ca-bundle\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.334064 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e21881f2-73fb-4d0f-974c-a74694a2b301-run-httpd\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.334143 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e21881f2-73fb-4d0f-974c-a74694a2b301-etc-swift\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.335128 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e21881f2-73fb-4d0f-974c-a74694a2b301-log-httpd\") pod 
\"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.335966 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e21881f2-73fb-4d0f-974c-a74694a2b301-run-httpd\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.340226 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e21881f2-73fb-4d0f-974c-a74694a2b301-internal-tls-certs\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.340284 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e21881f2-73fb-4d0f-974c-a74694a2b301-config-data\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.342645 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e21881f2-73fb-4d0f-974c-a74694a2b301-etc-swift\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.347764 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e21881f2-73fb-4d0f-974c-a74694a2b301-combined-ca-bundle\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " 
pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.348012 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e21881f2-73fb-4d0f-974c-a74694a2b301-public-tls-certs\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.364176 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjwtf\" (UniqueName: \"kubernetes.io/projected/e21881f2-73fb-4d0f-974c-a74694a2b301-kube-api-access-sjwtf\") pod \"swift-proxy-586778dd75-mtms6\" (UID: \"e21881f2-73fb-4d0f-974c-a74694a2b301\") " pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.409563 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.567752 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.568517 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="ceilometer-central-agent" containerID="cri-o://cdd3fc40499b9d917654397de25d20ab1cd4ae675735a4b0b9468346d147399e" gracePeriod=30 Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.569536 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="proxy-httpd" containerID="cri-o://8f2107431594a373c859cc33bc39621351c20b6ce045ef1c9f09d77a9e63fa8b" gracePeriod=30 Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.569590 4717 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="sg-core" containerID="cri-o://820d6da3c878b9ba56840bb1eec2fc47ed8e071e39358826981f678e56200250" gracePeriod=30 Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.569630 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="ceilometer-notification-agent" containerID="cri-o://5eff0b07a7121912c23ed3875ae3f77cec26fc44811d7c528e40b0c1997af65d" gracePeriod=30 Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.605770 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.168:3000/\": EOF" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.642702 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.645973 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.949392 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9b4ef341-6659-4283-81b4-78674dfd9fc8","Type":"ContainerStarted","Data":"63a566cd5baa8f9e68a637e8723d4ad037b3e769c1a6dba9b1e2de88154d58bb"} Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.955711 4717 generic.go:334] "Generic (PLEG): container finished" podID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerID="820d6da3c878b9ba56840bb1eec2fc47ed8e071e39358826981f678e56200250" exitCode=2 Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.955781 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"365febb7-ab5e-41fd-ad78-e6209e35653e","Type":"ContainerDied","Data":"820d6da3c878b9ba56840bb1eec2fc47ed8e071e39358826981f678e56200250"} Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.955841 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 12:08:19 crc kubenswrapper[4717]: I0218 12:08:19.975557 4717 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="12b250cd-4639-44fa-bc2f-d49dfa3058e4" podUID="9b4ef341-6659-4283-81b4-78674dfd9fc8" Feb 18 12:08:20 crc kubenswrapper[4717]: I0218 12:08:20.092189 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-586778dd75-mtms6"] Feb 18 12:08:20 crc kubenswrapper[4717]: I0218 12:08:20.974487 4717 generic.go:334] "Generic (PLEG): container finished" podID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerID="8f2107431594a373c859cc33bc39621351c20b6ce045ef1c9f09d77a9e63fa8b" exitCode=0 Feb 18 12:08:20 crc kubenswrapper[4717]: I0218 12:08:20.974909 4717 generic.go:334] "Generic (PLEG): container finished" podID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerID="cdd3fc40499b9d917654397de25d20ab1cd4ae675735a4b0b9468346d147399e" exitCode=0 Feb 18 12:08:20 crc kubenswrapper[4717]: I0218 12:08:20.974703 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365febb7-ab5e-41fd-ad78-e6209e35653e","Type":"ContainerDied","Data":"8f2107431594a373c859cc33bc39621351c20b6ce045ef1c9f09d77a9e63fa8b"} Feb 18 12:08:20 crc kubenswrapper[4717]: I0218 12:08:20.975008 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365febb7-ab5e-41fd-ad78-e6209e35653e","Type":"ContainerDied","Data":"cdd3fc40499b9d917654397de25d20ab1cd4ae675735a4b0b9468346d147399e"} Feb 18 12:08:20 crc kubenswrapper[4717]: I0218 12:08:20.977068 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-proxy-586778dd75-mtms6" event={"ID":"e21881f2-73fb-4d0f-974c-a74694a2b301","Type":"ContainerStarted","Data":"6207a70c1979d804c9dbe399148fdd5762489b8ca323497e7e623cef83f787db"} Feb 18 12:08:20 crc kubenswrapper[4717]: I0218 12:08:20.977096 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-586778dd75-mtms6" event={"ID":"e21881f2-73fb-4d0f-974c-a74694a2b301","Type":"ContainerStarted","Data":"2f0cee44a5c1460ee34969e531134aa74e789320dfa01b155d110585eaf7bc99"} Feb 18 12:08:20 crc kubenswrapper[4717]: I0218 12:08:20.977108 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-586778dd75-mtms6" event={"ID":"e21881f2-73fb-4d0f-974c-a74694a2b301","Type":"ContainerStarted","Data":"7de3e9443c2d7d50b305fb4b8f27c80048ab0de06e61cfdfa1a8742e8de6139c"} Feb 18 12:08:20 crc kubenswrapper[4717]: I0218 12:08:20.977449 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:20 crc kubenswrapper[4717]: I0218 12:08:20.977487 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:21 crc kubenswrapper[4717]: I0218 12:08:21.010976 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-586778dd75-mtms6" podStartSLOduration=2.010939636 podStartE2EDuration="2.010939636s" podCreationTimestamp="2026-02-18 12:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:08:21.007123166 +0000 UTC m=+1135.409224482" watchObservedRunningTime="2026-02-18 12:08:21.010939636 +0000 UTC m=+1135.413040952" Feb 18 12:08:21 crc kubenswrapper[4717]: I0218 12:08:21.056130 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12b250cd-4639-44fa-bc2f-d49dfa3058e4" 
path="/var/lib/kubelet/pods/12b250cd-4639-44fa-bc2f-d49dfa3058e4/volumes" Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.017623 4717 generic.go:334] "Generic (PLEG): container finished" podID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerID="5eff0b07a7121912c23ed3875ae3f77cec26fc44811d7c528e40b0c1997af65d" exitCode=0 Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.017716 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365febb7-ab5e-41fd-ad78-e6209e35653e","Type":"ContainerDied","Data":"5eff0b07a7121912c23ed3875ae3f77cec26fc44811d7c528e40b0c1997af65d"} Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.018141 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365febb7-ab5e-41fd-ad78-e6209e35653e","Type":"ContainerDied","Data":"6b95d8e042e19cef7c10a8c0120a6049a0647f4374d08de14357652f0ad6e30e"} Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.018157 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b95d8e042e19cef7c10a8c0120a6049a0647f4374d08de14357652f0ad6e30e" Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.019633 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.142086 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgt8s\" (UniqueName: \"kubernetes.io/projected/365febb7-ab5e-41fd-ad78-e6209e35653e-kube-api-access-rgt8s\") pod \"365febb7-ab5e-41fd-ad78-e6209e35653e\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.142175 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365febb7-ab5e-41fd-ad78-e6209e35653e-log-httpd\") pod \"365febb7-ab5e-41fd-ad78-e6209e35653e\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.142334 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-combined-ca-bundle\") pod \"365febb7-ab5e-41fd-ad78-e6209e35653e\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.142402 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-sg-core-conf-yaml\") pod \"365febb7-ab5e-41fd-ad78-e6209e35653e\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.142471 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365febb7-ab5e-41fd-ad78-e6209e35653e-run-httpd\") pod \"365febb7-ab5e-41fd-ad78-e6209e35653e\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.142546 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-scripts\") pod \"365febb7-ab5e-41fd-ad78-e6209e35653e\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.142608 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-config-data\") pod \"365febb7-ab5e-41fd-ad78-e6209e35653e\" (UID: \"365febb7-ab5e-41fd-ad78-e6209e35653e\") " Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.157381 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/365febb7-ab5e-41fd-ad78-e6209e35653e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "365febb7-ab5e-41fd-ad78-e6209e35653e" (UID: "365febb7-ab5e-41fd-ad78-e6209e35653e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.167440 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/365febb7-ab5e-41fd-ad78-e6209e35653e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "365febb7-ab5e-41fd-ad78-e6209e35653e" (UID: "365febb7-ab5e-41fd-ad78-e6209e35653e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.177831 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-scripts" (OuterVolumeSpecName: "scripts") pod "365febb7-ab5e-41fd-ad78-e6209e35653e" (UID: "365febb7-ab5e-41fd-ad78-e6209e35653e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.179522 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/365febb7-ab5e-41fd-ad78-e6209e35653e-kube-api-access-rgt8s" (OuterVolumeSpecName: "kube-api-access-rgt8s") pod "365febb7-ab5e-41fd-ad78-e6209e35653e" (UID: "365febb7-ab5e-41fd-ad78-e6209e35653e"). InnerVolumeSpecName "kube-api-access-rgt8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.245658 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "365febb7-ab5e-41fd-ad78-e6209e35653e" (UID: "365febb7-ab5e-41fd-ad78-e6209e35653e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.247210 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgt8s\" (UniqueName: \"kubernetes.io/projected/365febb7-ab5e-41fd-ad78-e6209e35653e-kube-api-access-rgt8s\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.247272 4717 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365febb7-ab5e-41fd-ad78-e6209e35653e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.247286 4717 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.247298 4717 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365febb7-ab5e-41fd-ad78-e6209e35653e-run-httpd\") on 
node \"crc\" DevicePath \"\"" Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.247308 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.329362 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "365febb7-ab5e-41fd-ad78-e6209e35653e" (UID: "365febb7-ab5e-41fd-ad78-e6209e35653e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.342023 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-config-data" (OuterVolumeSpecName: "config-data") pod "365febb7-ab5e-41fd-ad78-e6209e35653e" (UID: "365febb7-ab5e-41fd-ad78-e6209e35653e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.351502 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:22 crc kubenswrapper[4717]: I0218 12:08:22.351586 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365febb7-ab5e-41fd-ad78-e6209e35653e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.028139 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7bf89fd777-rchb4" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.032820 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.109472 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.132906 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.165369 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8584d7b78b-d8rzh"] Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.165704 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8584d7b78b-d8rzh" podUID="46f23333-73d2-4175-85f5-9ceb356a42ad" containerName="neutron-api" containerID="cri-o://f7101619926bdd05c3451459c84628bc5fab3e9a42bf113b23a38797ae940586" gracePeriod=30 Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.166361 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8584d7b78b-d8rzh" podUID="46f23333-73d2-4175-85f5-9ceb356a42ad" containerName="neutron-httpd" containerID="cri-o://8a9d28e4ecbfa886e508d891e0c6307e24cc6e57ed1a981a3dfd0784dac07955" gracePeriod=30 Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.185001 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:23 crc kubenswrapper[4717]: E0218 12:08:23.199451 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="proxy-httpd" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.199504 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="proxy-httpd" Feb 18 12:08:23 crc kubenswrapper[4717]: E0218 12:08:23.199543 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="ceilometer-central-agent" Feb 18 12:08:23 crc 
kubenswrapper[4717]: I0218 12:08:23.199552 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="ceilometer-central-agent" Feb 18 12:08:23 crc kubenswrapper[4717]: E0218 12:08:23.199759 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="sg-core" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.199772 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="sg-core" Feb 18 12:08:23 crc kubenswrapper[4717]: E0218 12:08:23.199790 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="ceilometer-notification-agent" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.199799 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="ceilometer-notification-agent" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.219131 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="ceilometer-central-agent" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.219391 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="ceilometer-notification-agent" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.219549 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="sg-core" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.219648 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" containerName="proxy-httpd" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.264162 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:23 crc kubenswrapper[4717]: 
I0218 12:08:23.264558 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.276500 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.281505 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.398162 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.398673 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-log-httpd\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.398869 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kswl\" (UniqueName: \"kubernetes.io/projected/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-kube-api-access-7kswl\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.399969 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-run-httpd\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: 
I0218 12:08:23.400025 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-scripts\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.400204 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-config-data\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.400239 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.504280 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.504710 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-log-httpd\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.504848 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kswl\" (UniqueName: 
\"kubernetes.io/projected/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-kube-api-access-7kswl\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.505013 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-run-httpd\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.505120 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-scripts\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.505340 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-config-data\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.505470 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.510277 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-run-httpd\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.510571 4717 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-log-httpd\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.518612 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.518814 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-config-data\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.518929 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-scripts\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.520305 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.532240 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kswl\" (UniqueName: \"kubernetes.io/projected/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-kube-api-access-7kswl\") pod \"ceilometer-0\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " 
pod="openstack/ceilometer-0" Feb 18 12:08:23 crc kubenswrapper[4717]: I0218 12:08:23.665238 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.058168 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8584d7b78b-d8rzh" event={"ID":"46f23333-73d2-4175-85f5-9ceb356a42ad","Type":"ContainerDied","Data":"8a9d28e4ecbfa886e508d891e0c6307e24cc6e57ed1a981a3dfd0784dac07955"} Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.058290 4717 generic.go:334] "Generic (PLEG): container finished" podID="46f23333-73d2-4175-85f5-9ceb356a42ad" containerID="8a9d28e4ecbfa886e508d891e0c6307e24cc6e57ed1a981a3dfd0784dac07955" exitCode=0 Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.065826 4717 generic.go:334] "Generic (PLEG): container finished" podID="9a262b0a-4d1c-46ba-b281-d95194a8bfa2" containerID="8e37bf7109799268f1e7fd5e109350e2ca472fc89a6970764c2bae8e20bb8940" exitCode=137 Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.065887 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-698f4fc767-8w7f5" event={"ID":"9a262b0a-4d1c-46ba-b281-d95194a8bfa2","Type":"ContainerDied","Data":"8e37bf7109799268f1e7fd5e109350e2ca472fc89a6970764c2bae8e20bb8940"} Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.072500 4717 generic.go:334] "Generic (PLEG): container finished" podID="22aba54a-8df1-4b52-821f-c25b7ff37d18" containerID="2412adada9e5f4150422e207087565feb8dd7c4139158d0fb4219e4afb062d0f" exitCode=137 Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.072566 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859c78c974-8tzts" event={"ID":"22aba54a-8df1-4b52-821f-c25b7ff37d18","Type":"ContainerDied","Data":"2412adada9e5f4150422e207087565feb8dd7c4139158d0fb4219e4afb062d0f"} Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.258222 4717 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:24 crc kubenswrapper[4717]: W0218 12:08:24.272470 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podceabfb4a_86c0_4bb9_952c_5b7193cc6b79.slice/crio-8b9e8bdb70c0209cbc87b117fd02e82087cb33ef5d19e8eff23837d01ccfbcef WatchSource:0}: Error finding container 8b9e8bdb70c0209cbc87b117fd02e82087cb33ef5d19e8eff23837d01ccfbcef: Status 404 returned error can't find the container with id 8b9e8bdb70c0209cbc87b117fd02e82087cb33ef5d19e8eff23837d01ccfbcef Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.461385 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.468221 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.532831 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-config-data\") pod \"22aba54a-8df1-4b52-821f-c25b7ff37d18\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.532927 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-config-data-custom\") pod \"22aba54a-8df1-4b52-821f-c25b7ff37d18\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.532968 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-combined-ca-bundle\") pod 
\"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.533069 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgl99\" (UniqueName: \"kubernetes.io/projected/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-kube-api-access-dgl99\") pod \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.533103 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtwzr\" (UniqueName: \"kubernetes.io/projected/22aba54a-8df1-4b52-821f-c25b7ff37d18-kube-api-access-vtwzr\") pod \"22aba54a-8df1-4b52-821f-c25b7ff37d18\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.533138 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-logs\") pod \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.533194 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-combined-ca-bundle\") pod \"22aba54a-8df1-4b52-821f-c25b7ff37d18\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.533219 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-config-data\") pod \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.533302 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-config-data-custom\") pod \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\" (UID: \"9a262b0a-4d1c-46ba-b281-d95194a8bfa2\") " Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.533328 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22aba54a-8df1-4b52-821f-c25b7ff37d18-logs\") pod \"22aba54a-8df1-4b52-821f-c25b7ff37d18\" (UID: \"22aba54a-8df1-4b52-821f-c25b7ff37d18\") " Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.534422 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-logs" (OuterVolumeSpecName: "logs") pod "9a262b0a-4d1c-46ba-b281-d95194a8bfa2" (UID: "9a262b0a-4d1c-46ba-b281-d95194a8bfa2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.534467 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22aba54a-8df1-4b52-821f-c25b7ff37d18-logs" (OuterVolumeSpecName: "logs") pod "22aba54a-8df1-4b52-821f-c25b7ff37d18" (UID: "22aba54a-8df1-4b52-821f-c25b7ff37d18"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.546877 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22aba54a-8df1-4b52-821f-c25b7ff37d18-kube-api-access-vtwzr" (OuterVolumeSpecName: "kube-api-access-vtwzr") pod "22aba54a-8df1-4b52-821f-c25b7ff37d18" (UID: "22aba54a-8df1-4b52-821f-c25b7ff37d18"). InnerVolumeSpecName "kube-api-access-vtwzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.553110 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-kube-api-access-dgl99" (OuterVolumeSpecName: "kube-api-access-dgl99") pod "9a262b0a-4d1c-46ba-b281-d95194a8bfa2" (UID: "9a262b0a-4d1c-46ba-b281-d95194a8bfa2"). InnerVolumeSpecName "kube-api-access-dgl99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.553199 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9a262b0a-4d1c-46ba-b281-d95194a8bfa2" (UID: "9a262b0a-4d1c-46ba-b281-d95194a8bfa2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.559724 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "22aba54a-8df1-4b52-821f-c25b7ff37d18" (UID: "22aba54a-8df1-4b52-821f-c25b7ff37d18"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.585480 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22aba54a-8df1-4b52-821f-c25b7ff37d18" (UID: "22aba54a-8df1-4b52-821f-c25b7ff37d18"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.621496 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-config-data" (OuterVolumeSpecName: "config-data") pod "22aba54a-8df1-4b52-821f-c25b7ff37d18" (UID: "22aba54a-8df1-4b52-821f-c25b7ff37d18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.628033 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a262b0a-4d1c-46ba-b281-d95194a8bfa2" (UID: "9a262b0a-4d1c-46ba-b281-d95194a8bfa2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.637921 4717 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.638019 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22aba54a-8df1-4b52-821f-c25b7ff37d18-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.638031 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.638040 4717 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:24 
crc kubenswrapper[4717]: I0218 12:08:24.638050 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.638058 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgl99\" (UniqueName: \"kubernetes.io/projected/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-kube-api-access-dgl99\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.638070 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtwzr\" (UniqueName: \"kubernetes.io/projected/22aba54a-8df1-4b52-821f-c25b7ff37d18-kube-api-access-vtwzr\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.638083 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.638095 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22aba54a-8df1-4b52-821f-c25b7ff37d18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.660158 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-config-data" (OuterVolumeSpecName: "config-data") pod "9a262b0a-4d1c-46ba-b281-d95194a8bfa2" (UID: "9a262b0a-4d1c-46ba-b281-d95194a8bfa2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.734919 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 12:08:24 crc kubenswrapper[4717]: I0218 12:08:24.740897 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a262b0a-4d1c-46ba-b281-d95194a8bfa2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:25 crc kubenswrapper[4717]: I0218 12:08:25.063827 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="365febb7-ab5e-41fd-ad78-e6209e35653e" path="/var/lib/kubelet/pods/365febb7-ab5e-41fd-ad78-e6209e35653e/volumes" Feb 18 12:08:25 crc kubenswrapper[4717]: I0218 12:08:25.146189 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79","Type":"ContainerStarted","Data":"8b9e8bdb70c0209cbc87b117fd02e82087cb33ef5d19e8eff23837d01ccfbcef"} Feb 18 12:08:25 crc kubenswrapper[4717]: I0218 12:08:25.153900 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-698f4fc767-8w7f5" event={"ID":"9a262b0a-4d1c-46ba-b281-d95194a8bfa2","Type":"ContainerDied","Data":"f705caa3048b337b6c4aa5775a308f5d6fe2f88422c95e9936590ce9020f8256"} Feb 18 12:08:25 crc kubenswrapper[4717]: I0218 12:08:25.153977 4717 scope.go:117] "RemoveContainer" containerID="8e37bf7109799268f1e7fd5e109350e2ca472fc89a6970764c2bae8e20bb8940" Feb 18 12:08:25 crc kubenswrapper[4717]: I0218 12:08:25.154157 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-698f4fc767-8w7f5" Feb 18 12:08:25 crc kubenswrapper[4717]: I0218 12:08:25.193547 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-859c78c974-8tzts" event={"ID":"22aba54a-8df1-4b52-821f-c25b7ff37d18","Type":"ContainerDied","Data":"55bced52c29e1f4cf134a2b280794121b26e9d8e71f6d0182b6b6489567ca405"} Feb 18 12:08:25 crc kubenswrapper[4717]: I0218 12:08:25.193700 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-859c78c974-8tzts" Feb 18 12:08:25 crc kubenswrapper[4717]: I0218 12:08:25.247827 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-698f4fc767-8w7f5"] Feb 18 12:08:25 crc kubenswrapper[4717]: I0218 12:08:25.262541 4717 scope.go:117] "RemoveContainer" containerID="70e84b0a71afc8b72c8be2939152aa334d677f9629452228d6a619a9e3118d52" Feb 18 12:08:25 crc kubenswrapper[4717]: I0218 12:08:25.264865 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-698f4fc767-8w7f5"] Feb 18 12:08:25 crc kubenswrapper[4717]: I0218 12:08:25.310598 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-859c78c974-8tzts"] Feb 18 12:08:25 crc kubenswrapper[4717]: I0218 12:08:25.335542 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-859c78c974-8tzts"] Feb 18 12:08:25 crc kubenswrapper[4717]: I0218 12:08:25.356822 4717 scope.go:117] "RemoveContainer" containerID="2412adada9e5f4150422e207087565feb8dd7c4139158d0fb4219e4afb062d0f" Feb 18 12:08:25 crc kubenswrapper[4717]: I0218 12:08:25.400676 4717 scope.go:117] "RemoveContainer" containerID="55a3e2f5c403b74702580205b3a8e2a10afa0e1b1f76a63d6f5dd5e89637f595" Feb 18 12:08:26 crc kubenswrapper[4717]: I0218 12:08:26.228028 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79","Type":"ContainerStarted","Data":"5b31aed126f443b53172f5bb563956425d6990d363118c11b3f0d55ec5081091"} Feb 18 12:08:26 crc kubenswrapper[4717]: I0218 12:08:26.892959 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:27 crc kubenswrapper[4717]: I0218 12:08:27.059891 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22aba54a-8df1-4b52-821f-c25b7ff37d18" path="/var/lib/kubelet/pods/22aba54a-8df1-4b52-821f-c25b7ff37d18/volumes" Feb 18 12:08:27 crc kubenswrapper[4717]: I0218 12:08:27.061767 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a262b0a-4d1c-46ba-b281-d95194a8bfa2" path="/var/lib/kubelet/pods/9a262b0a-4d1c-46ba-b281-d95194a8bfa2/volumes" Feb 18 12:08:27 crc kubenswrapper[4717]: I0218 12:08:27.266295 4717 generic.go:334] "Generic (PLEG): container finished" podID="46f23333-73d2-4175-85f5-9ceb356a42ad" containerID="f7101619926bdd05c3451459c84628bc5fab3e9a42bf113b23a38797ae940586" exitCode=0 Feb 18 12:08:27 crc kubenswrapper[4717]: I0218 12:08:27.266380 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8584d7b78b-d8rzh" event={"ID":"46f23333-73d2-4175-85f5-9ceb356a42ad","Type":"ContainerDied","Data":"f7101619926bdd05c3451459c84628bc5fab3e9a42bf113b23a38797ae940586"} Feb 18 12:08:27 crc kubenswrapper[4717]: I0218 12:08:27.269603 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79","Type":"ContainerStarted","Data":"d92441fe166b80488e25dbdae97823d2d7883423a0850e25c9bf1fed4380680a"} Feb 18 12:08:29 crc kubenswrapper[4717]: I0218 12:08:29.418688 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:29 crc kubenswrapper[4717]: I0218 12:08:29.423080 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-586778dd75-mtms6" Feb 18 12:08:34 crc kubenswrapper[4717]: I0218 12:08:34.351370 4717 generic.go:334] "Generic (PLEG): container finished" podID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerID="0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce" exitCode=137 Feb 18 12:08:34 crc kubenswrapper[4717]: I0218 12:08:34.351454 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b5f4c76fb-t68w8" event={"ID":"619d0b9d-837a-4790-88cd-d2e11c6da6fc","Type":"ContainerDied","Data":"0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce"} Feb 18 12:08:35 crc kubenswrapper[4717]: I0218 12:08:35.573375 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:08:35 crc kubenswrapper[4717]: I0218 12:08:35.642458 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-combined-ca-bundle\") pod \"46f23333-73d2-4175-85f5-9ceb356a42ad\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " Feb 18 12:08:35 crc kubenswrapper[4717]: I0218 12:08:35.643354 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4zv4\" (UniqueName: \"kubernetes.io/projected/46f23333-73d2-4175-85f5-9ceb356a42ad-kube-api-access-j4zv4\") pod \"46f23333-73d2-4175-85f5-9ceb356a42ad\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " Feb 18 12:08:35 crc kubenswrapper[4717]: I0218 12:08:35.643523 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-httpd-config\") pod \"46f23333-73d2-4175-85f5-9ceb356a42ad\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " Feb 18 12:08:35 crc kubenswrapper[4717]: I0218 12:08:35.643859 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-config\") pod \"46f23333-73d2-4175-85f5-9ceb356a42ad\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " Feb 18 12:08:35 crc kubenswrapper[4717]: I0218 12:08:35.644059 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-ovndb-tls-certs\") pod \"46f23333-73d2-4175-85f5-9ceb356a42ad\" (UID: \"46f23333-73d2-4175-85f5-9ceb356a42ad\") " Feb 18 12:08:35 crc kubenswrapper[4717]: I0218 12:08:35.648323 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "46f23333-73d2-4175-85f5-9ceb356a42ad" (UID: "46f23333-73d2-4175-85f5-9ceb356a42ad"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:35 crc kubenswrapper[4717]: I0218 12:08:35.656487 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f23333-73d2-4175-85f5-9ceb356a42ad-kube-api-access-j4zv4" (OuterVolumeSpecName: "kube-api-access-j4zv4") pod "46f23333-73d2-4175-85f5-9ceb356a42ad" (UID: "46f23333-73d2-4175-85f5-9ceb356a42ad"). InnerVolumeSpecName "kube-api-access-j4zv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:35 crc kubenswrapper[4717]: I0218 12:08:35.725494 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-config" (OuterVolumeSpecName: "config") pod "46f23333-73d2-4175-85f5-9ceb356a42ad" (UID: "46f23333-73d2-4175-85f5-9ceb356a42ad"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:35 crc kubenswrapper[4717]: I0218 12:08:35.727661 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46f23333-73d2-4175-85f5-9ceb356a42ad" (UID: "46f23333-73d2-4175-85f5-9ceb356a42ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:35 crc kubenswrapper[4717]: I0218 12:08:35.746362 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:35 crc kubenswrapper[4717]: I0218 12:08:35.746401 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4zv4\" (UniqueName: \"kubernetes.io/projected/46f23333-73d2-4175-85f5-9ceb356a42ad-kube-api-access-j4zv4\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:35 crc kubenswrapper[4717]: I0218 12:08:35.746416 4717 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:35 crc kubenswrapper[4717]: I0218 12:08:35.746426 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:35 crc kubenswrapper[4717]: I0218 12:08:35.768089 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "46f23333-73d2-4175-85f5-9ceb356a42ad" (UID: "46f23333-73d2-4175-85f5-9ceb356a42ad"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:35 crc kubenswrapper[4717]: I0218 12:08:35.849471 4717 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46f23333-73d2-4175-85f5-9ceb356a42ad-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:36 crc kubenswrapper[4717]: I0218 12:08:36.382873 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79","Type":"ContainerStarted","Data":"bf272b7d700b67b4aaee4a5c9023ff30a80c6638ef74fa91d844026384be7b08"} Feb 18 12:08:36 crc kubenswrapper[4717]: I0218 12:08:36.384955 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9b4ef341-6659-4283-81b4-78674dfd9fc8","Type":"ContainerStarted","Data":"446f4eb0b40f38bcd5008e80d1e44069d9aafea402dcfc75b5ff665ab7f0afa0"} Feb 18 12:08:36 crc kubenswrapper[4717]: I0218 12:08:36.388625 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b5f4c76fb-t68w8" event={"ID":"619d0b9d-837a-4790-88cd-d2e11c6da6fc","Type":"ContainerStarted","Data":"2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2"} Feb 18 12:08:36 crc kubenswrapper[4717]: I0218 12:08:36.388836 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b5f4c76fb-t68w8" podUID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerName="horizon-log" containerID="cri-o://ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494" gracePeriod=30 Feb 18 12:08:36 crc kubenswrapper[4717]: I0218 12:08:36.389031 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b5f4c76fb-t68w8" podUID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerName="horizon" containerID="cri-o://2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2" gracePeriod=30 Feb 18 12:08:36 crc kubenswrapper[4717]: I0218 
12:08:36.393948 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8584d7b78b-d8rzh" event={"ID":"46f23333-73d2-4175-85f5-9ceb356a42ad","Type":"ContainerDied","Data":"eb39ac4a629f3d69cd9553f60b3f43192fd9a762c8f82b201f6acde8c25b8494"} Feb 18 12:08:36 crc kubenswrapper[4717]: I0218 12:08:36.394024 4717 scope.go:117] "RemoveContainer" containerID="8a9d28e4ecbfa886e508d891e0c6307e24cc6e57ed1a981a3dfd0784dac07955" Feb 18 12:08:36 crc kubenswrapper[4717]: I0218 12:08:36.394242 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8584d7b78b-d8rzh" Feb 18 12:08:36 crc kubenswrapper[4717]: I0218 12:08:36.434104 4717 scope.go:117] "RemoveContainer" containerID="f7101619926bdd05c3451459c84628bc5fab3e9a42bf113b23a38797ae940586" Feb 18 12:08:36 crc kubenswrapper[4717]: I0218 12:08:36.440482 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.194438765 podStartE2EDuration="19.440451739s" podCreationTimestamp="2026-02-18 12:08:17 +0000 UTC" firstStartedPulling="2026-02-18 12:08:19.010764757 +0000 UTC m=+1133.412866073" lastFinishedPulling="2026-02-18 12:08:35.256777731 +0000 UTC m=+1149.658879047" observedRunningTime="2026-02-18 12:08:36.406835183 +0000 UTC m=+1150.808936499" watchObservedRunningTime="2026-02-18 12:08:36.440451739 +0000 UTC m=+1150.842553055" Feb 18 12:08:36 crc kubenswrapper[4717]: I0218 12:08:36.476403 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8584d7b78b-d8rzh"] Feb 18 12:08:36 crc kubenswrapper[4717]: I0218 12:08:36.490166 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8584d7b78b-d8rzh"] Feb 18 12:08:37 crc kubenswrapper[4717]: I0218 12:08:37.050668 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f23333-73d2-4175-85f5-9ceb356a42ad" path="/var/lib/kubelet/pods/46f23333-73d2-4175-85f5-9ceb356a42ad/volumes" Feb 18 
12:08:37 crc kubenswrapper[4717]: I0218 12:08:37.961847 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-j4pk2"] Feb 18 12:08:37 crc kubenswrapper[4717]: E0218 12:08:37.962750 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22aba54a-8df1-4b52-821f-c25b7ff37d18" containerName="barbican-keystone-listener-log" Feb 18 12:08:37 crc kubenswrapper[4717]: I0218 12:08:37.962772 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="22aba54a-8df1-4b52-821f-c25b7ff37d18" containerName="barbican-keystone-listener-log" Feb 18 12:08:37 crc kubenswrapper[4717]: E0218 12:08:37.962788 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f23333-73d2-4175-85f5-9ceb356a42ad" containerName="neutron-api" Feb 18 12:08:37 crc kubenswrapper[4717]: I0218 12:08:37.962796 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f23333-73d2-4175-85f5-9ceb356a42ad" containerName="neutron-api" Feb 18 12:08:37 crc kubenswrapper[4717]: E0218 12:08:37.962814 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22aba54a-8df1-4b52-821f-c25b7ff37d18" containerName="barbican-keystone-listener" Feb 18 12:08:37 crc kubenswrapper[4717]: I0218 12:08:37.962822 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="22aba54a-8df1-4b52-821f-c25b7ff37d18" containerName="barbican-keystone-listener" Feb 18 12:08:37 crc kubenswrapper[4717]: E0218 12:08:37.962839 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a262b0a-4d1c-46ba-b281-d95194a8bfa2" containerName="barbican-worker" Feb 18 12:08:37 crc kubenswrapper[4717]: I0218 12:08:37.962846 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a262b0a-4d1c-46ba-b281-d95194a8bfa2" containerName="barbican-worker" Feb 18 12:08:37 crc kubenswrapper[4717]: E0218 12:08:37.962870 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f23333-73d2-4175-85f5-9ceb356a42ad" containerName="neutron-httpd" Feb 18 
12:08:37 crc kubenswrapper[4717]: I0218 12:08:37.962877 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f23333-73d2-4175-85f5-9ceb356a42ad" containerName="neutron-httpd" Feb 18 12:08:37 crc kubenswrapper[4717]: E0218 12:08:37.962900 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a262b0a-4d1c-46ba-b281-d95194a8bfa2" containerName="barbican-worker-log" Feb 18 12:08:37 crc kubenswrapper[4717]: I0218 12:08:37.962907 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a262b0a-4d1c-46ba-b281-d95194a8bfa2" containerName="barbican-worker-log" Feb 18 12:08:37 crc kubenswrapper[4717]: I0218 12:08:37.963118 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="22aba54a-8df1-4b52-821f-c25b7ff37d18" containerName="barbican-keystone-listener" Feb 18 12:08:37 crc kubenswrapper[4717]: I0218 12:08:37.963137 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a262b0a-4d1c-46ba-b281-d95194a8bfa2" containerName="barbican-worker" Feb 18 12:08:37 crc kubenswrapper[4717]: I0218 12:08:37.963153 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="22aba54a-8df1-4b52-821f-c25b7ff37d18" containerName="barbican-keystone-listener-log" Feb 18 12:08:37 crc kubenswrapper[4717]: I0218 12:08:37.963160 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f23333-73d2-4175-85f5-9ceb356a42ad" containerName="neutron-httpd" Feb 18 12:08:37 crc kubenswrapper[4717]: I0218 12:08:37.963176 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f23333-73d2-4175-85f5-9ceb356a42ad" containerName="neutron-api" Feb 18 12:08:37 crc kubenswrapper[4717]: I0218 12:08:37.963183 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a262b0a-4d1c-46ba-b281-d95194a8bfa2" containerName="barbican-worker-log" Feb 18 12:08:37 crc kubenswrapper[4717]: I0218 12:08:37.963927 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-j4pk2" Feb 18 12:08:37 crc kubenswrapper[4717]: I0218 12:08:37.985336 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-j4pk2"] Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.121947 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27f7m\" (UniqueName: \"kubernetes.io/projected/fe18f0e8-9d43-49ba-afd5-854b7540e855-kube-api-access-27f7m\") pod \"nova-api-db-create-j4pk2\" (UID: \"fe18f0e8-9d43-49ba-afd5-854b7540e855\") " pod="openstack/nova-api-db-create-j4pk2" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.122563 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe18f0e8-9d43-49ba-afd5-854b7540e855-operator-scripts\") pod \"nova-api-db-create-j4pk2\" (UID: \"fe18f0e8-9d43-49ba-afd5-854b7540e855\") " pod="openstack/nova-api-db-create-j4pk2" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.208755 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6jnqg"] Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.210706 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6jnqg" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.220427 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6jnqg"] Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.229477 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27f7m\" (UniqueName: \"kubernetes.io/projected/fe18f0e8-9d43-49ba-afd5-854b7540e855-kube-api-access-27f7m\") pod \"nova-api-db-create-j4pk2\" (UID: \"fe18f0e8-9d43-49ba-afd5-854b7540e855\") " pod="openstack/nova-api-db-create-j4pk2" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.229617 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe18f0e8-9d43-49ba-afd5-854b7540e855-operator-scripts\") pod \"nova-api-db-create-j4pk2\" (UID: \"fe18f0e8-9d43-49ba-afd5-854b7540e855\") " pod="openstack/nova-api-db-create-j4pk2" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.230368 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe18f0e8-9d43-49ba-afd5-854b7540e855-operator-scripts\") pod \"nova-api-db-create-j4pk2\" (UID: \"fe18f0e8-9d43-49ba-afd5-854b7540e855\") " pod="openstack/nova-api-db-create-j4pk2" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.285776 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27f7m\" (UniqueName: \"kubernetes.io/projected/fe18f0e8-9d43-49ba-afd5-854b7540e855-kube-api-access-27f7m\") pod \"nova-api-db-create-j4pk2\" (UID: \"fe18f0e8-9d43-49ba-afd5-854b7540e855\") " pod="openstack/nova-api-db-create-j4pk2" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.325551 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-z5gs7"] Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.327321 
4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z5gs7" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.332491 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rztr8\" (UniqueName: \"kubernetes.io/projected/e12eeb3e-8069-404c-ab57-9a182bd555e4-kube-api-access-rztr8\") pod \"nova-cell0-db-create-6jnqg\" (UID: \"e12eeb3e-8069-404c-ab57-9a182bd555e4\") " pod="openstack/nova-cell0-db-create-6jnqg" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.332543 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e12eeb3e-8069-404c-ab57-9a182bd555e4-operator-scripts\") pod \"nova-cell0-db-create-6jnqg\" (UID: \"e12eeb3e-8069-404c-ab57-9a182bd555e4\") " pod="openstack/nova-cell0-db-create-6jnqg" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.346501 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-z5gs7"] Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.426085 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79","Type":"ContainerStarted","Data":"294b65c15020b33e8d2cf5ff3aca9a2afd10a3361f4895097a15d03754fe4a74"} Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.426390 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.426390 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerName="ceilometer-central-agent" containerID="cri-o://5b31aed126f443b53172f5bb563956425d6990d363118c11b3f0d55ec5081091" gracePeriod=30 Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.426467 4717 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerName="ceilometer-notification-agent" containerID="cri-o://d92441fe166b80488e25dbdae97823d2d7883423a0850e25c9bf1fed4380680a" gracePeriod=30 Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.426445 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerName="sg-core" containerID="cri-o://bf272b7d700b67b4aaee4a5c9023ff30a80c6638ef74fa91d844026384be7b08" gracePeriod=30 Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.426443 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerName="proxy-httpd" containerID="cri-o://294b65c15020b33e8d2cf5ff3aca9a2afd10a3361f4895097a15d03754fe4a74" gracePeriod=30 Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.443667 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rztr8\" (UniqueName: \"kubernetes.io/projected/e12eeb3e-8069-404c-ab57-9a182bd555e4-kube-api-access-rztr8\") pod \"nova-cell0-db-create-6jnqg\" (UID: \"e12eeb3e-8069-404c-ab57-9a182bd555e4\") " pod="openstack/nova-cell0-db-create-6jnqg" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.443801 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e12eeb3e-8069-404c-ab57-9a182bd555e4-operator-scripts\") pod \"nova-cell0-db-create-6jnqg\" (UID: \"e12eeb3e-8069-404c-ab57-9a182bd555e4\") " pod="openstack/nova-cell0-db-create-6jnqg" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.443998 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwrrk\" (UniqueName: 
\"kubernetes.io/projected/a3ff3a55-d178-47f2-9d17-069494943080-kube-api-access-dwrrk\") pod \"nova-cell1-db-create-z5gs7\" (UID: \"a3ff3a55-d178-47f2-9d17-069494943080\") " pod="openstack/nova-cell1-db-create-z5gs7" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.444137 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3ff3a55-d178-47f2-9d17-069494943080-operator-scripts\") pod \"nova-cell1-db-create-z5gs7\" (UID: \"a3ff3a55-d178-47f2-9d17-069494943080\") " pod="openstack/nova-cell1-db-create-z5gs7" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.445028 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e12eeb3e-8069-404c-ab57-9a182bd555e4-operator-scripts\") pod \"nova-cell0-db-create-6jnqg\" (UID: \"e12eeb3e-8069-404c-ab57-9a182bd555e4\") " pod="openstack/nova-cell0-db-create-6jnqg" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.469925 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0466-account-create-update-m65cg"] Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.479675 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0466-account-create-update-m65cg" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.482273 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rztr8\" (UniqueName: \"kubernetes.io/projected/e12eeb3e-8069-404c-ab57-9a182bd555e4-kube-api-access-rztr8\") pod \"nova-cell0-db-create-6jnqg\" (UID: \"e12eeb3e-8069-404c-ab57-9a182bd555e4\") " pod="openstack/nova-cell0-db-create-6jnqg" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.490397 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.535386 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6jnqg" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.557521 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bb8ac16-ce77-4ce1-badf-2d4d610757f3-operator-scripts\") pod \"nova-api-0466-account-create-update-m65cg\" (UID: \"0bb8ac16-ce77-4ce1-badf-2d4d610757f3\") " pod="openstack/nova-api-0466-account-create-update-m65cg" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.557757 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwrrk\" (UniqueName: \"kubernetes.io/projected/a3ff3a55-d178-47f2-9d17-069494943080-kube-api-access-dwrrk\") pod \"nova-cell1-db-create-z5gs7\" (UID: \"a3ff3a55-d178-47f2-9d17-069494943080\") " pod="openstack/nova-cell1-db-create-z5gs7" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.557837 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pltt2\" (UniqueName: \"kubernetes.io/projected/0bb8ac16-ce77-4ce1-badf-2d4d610757f3-kube-api-access-pltt2\") pod \"nova-api-0466-account-create-update-m65cg\" 
(UID: \"0bb8ac16-ce77-4ce1-badf-2d4d610757f3\") " pod="openstack/nova-api-0466-account-create-update-m65cg" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.557979 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3ff3a55-d178-47f2-9d17-069494943080-operator-scripts\") pod \"nova-cell1-db-create-z5gs7\" (UID: \"a3ff3a55-d178-47f2-9d17-069494943080\") " pod="openstack/nova-cell1-db-create-z5gs7" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.559045 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3ff3a55-d178-47f2-9d17-069494943080-operator-scripts\") pod \"nova-cell1-db-create-z5gs7\" (UID: \"a3ff3a55-d178-47f2-9d17-069494943080\") " pod="openstack/nova-cell1-db-create-z5gs7" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.568122 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.528649136 podStartE2EDuration="15.568085552s" podCreationTimestamp="2026-02-18 12:08:23 +0000 UTC" firstStartedPulling="2026-02-18 12:08:24.284584613 +0000 UTC m=+1138.686685929" lastFinishedPulling="2026-02-18 12:08:37.324021029 +0000 UTC m=+1151.726122345" observedRunningTime="2026-02-18 12:08:38.473667668 +0000 UTC m=+1152.875768984" watchObservedRunningTime="2026-02-18 12:08:38.568085552 +0000 UTC m=+1152.970186868" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.569401 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0466-account-create-update-m65cg"] Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.583800 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-j4pk2" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.600803 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwrrk\" (UniqueName: \"kubernetes.io/projected/a3ff3a55-d178-47f2-9d17-069494943080-kube-api-access-dwrrk\") pod \"nova-cell1-db-create-z5gs7\" (UID: \"a3ff3a55-d178-47f2-9d17-069494943080\") " pod="openstack/nova-cell1-db-create-z5gs7" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.658167 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z5gs7" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.661643 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bb8ac16-ce77-4ce1-badf-2d4d610757f3-operator-scripts\") pod \"nova-api-0466-account-create-update-m65cg\" (UID: \"0bb8ac16-ce77-4ce1-badf-2d4d610757f3\") " pod="openstack/nova-api-0466-account-create-update-m65cg" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.661783 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pltt2\" (UniqueName: \"kubernetes.io/projected/0bb8ac16-ce77-4ce1-badf-2d4d610757f3-kube-api-access-pltt2\") pod \"nova-api-0466-account-create-update-m65cg\" (UID: \"0bb8ac16-ce77-4ce1-badf-2d4d610757f3\") " pod="openstack/nova-api-0466-account-create-update-m65cg" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.663736 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bb8ac16-ce77-4ce1-badf-2d4d610757f3-operator-scripts\") pod \"nova-api-0466-account-create-update-m65cg\" (UID: \"0bb8ac16-ce77-4ce1-badf-2d4d610757f3\") " pod="openstack/nova-api-0466-account-create-update-m65cg" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.663801 4717 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-cell0-7188-account-create-update-vmgn4"] Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.666408 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7188-account-create-update-vmgn4" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.669032 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.673357 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7188-account-create-update-vmgn4"] Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.745323 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pltt2\" (UniqueName: \"kubernetes.io/projected/0bb8ac16-ce77-4ce1-badf-2d4d610757f3-kube-api-access-pltt2\") pod \"nova-api-0466-account-create-update-m65cg\" (UID: \"0bb8ac16-ce77-4ce1-badf-2d4d610757f3\") " pod="openstack/nova-api-0466-account-create-update-m65cg" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.765338 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a354e546-d49a-4825-a255-ce8888c40e42-operator-scripts\") pod \"nova-cell0-7188-account-create-update-vmgn4\" (UID: \"a354e546-d49a-4825-a255-ce8888c40e42\") " pod="openstack/nova-cell0-7188-account-create-update-vmgn4" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.765396 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27v4q\" (UniqueName: \"kubernetes.io/projected/a354e546-d49a-4825-a255-ce8888c40e42-kube-api-access-27v4q\") pod \"nova-cell0-7188-account-create-update-vmgn4\" (UID: \"a354e546-d49a-4825-a255-ce8888c40e42\") " pod="openstack/nova-cell0-7188-account-create-update-vmgn4" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.785040 
4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3290-account-create-update-rg9fw"] Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.786389 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3290-account-create-update-rg9fw" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.790986 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.802927 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3290-account-create-update-rg9fw"] Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.804866 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0466-account-create-update-m65cg" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.868069 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a354e546-d49a-4825-a255-ce8888c40e42-operator-scripts\") pod \"nova-cell0-7188-account-create-update-vmgn4\" (UID: \"a354e546-d49a-4825-a255-ce8888c40e42\") " pod="openstack/nova-cell0-7188-account-create-update-vmgn4" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.870181 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27v4q\" (UniqueName: \"kubernetes.io/projected/a354e546-d49a-4825-a255-ce8888c40e42-kube-api-access-27v4q\") pod \"nova-cell0-7188-account-create-update-vmgn4\" (UID: \"a354e546-d49a-4825-a255-ce8888c40e42\") " pod="openstack/nova-cell0-7188-account-create-update-vmgn4" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.870385 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjtpw\" (UniqueName: \"kubernetes.io/projected/ada73f5c-ac32-44b6-9af8-dbc560004935-kube-api-access-kjtpw\") 
pod \"nova-cell1-3290-account-create-update-rg9fw\" (UID: \"ada73f5c-ac32-44b6-9af8-dbc560004935\") " pod="openstack/nova-cell1-3290-account-create-update-rg9fw" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.869747 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a354e546-d49a-4825-a255-ce8888c40e42-operator-scripts\") pod \"nova-cell0-7188-account-create-update-vmgn4\" (UID: \"a354e546-d49a-4825-a255-ce8888c40e42\") " pod="openstack/nova-cell0-7188-account-create-update-vmgn4" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.870771 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ada73f5c-ac32-44b6-9af8-dbc560004935-operator-scripts\") pod \"nova-cell1-3290-account-create-update-rg9fw\" (UID: \"ada73f5c-ac32-44b6-9af8-dbc560004935\") " pod="openstack/nova-cell1-3290-account-create-update-rg9fw" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.912204 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27v4q\" (UniqueName: \"kubernetes.io/projected/a354e546-d49a-4825-a255-ce8888c40e42-kube-api-access-27v4q\") pod \"nova-cell0-7188-account-create-update-vmgn4\" (UID: \"a354e546-d49a-4825-a255-ce8888c40e42\") " pod="openstack/nova-cell0-7188-account-create-update-vmgn4" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.973229 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ada73f5c-ac32-44b6-9af8-dbc560004935-operator-scripts\") pod \"nova-cell1-3290-account-create-update-rg9fw\" (UID: \"ada73f5c-ac32-44b6-9af8-dbc560004935\") " pod="openstack/nova-cell1-3290-account-create-update-rg9fw" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.973405 4717 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-kjtpw\" (UniqueName: \"kubernetes.io/projected/ada73f5c-ac32-44b6-9af8-dbc560004935-kube-api-access-kjtpw\") pod \"nova-cell1-3290-account-create-update-rg9fw\" (UID: \"ada73f5c-ac32-44b6-9af8-dbc560004935\") " pod="openstack/nova-cell1-3290-account-create-update-rg9fw" Feb 18 12:08:38 crc kubenswrapper[4717]: I0218 12:08:38.975094 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ada73f5c-ac32-44b6-9af8-dbc560004935-operator-scripts\") pod \"nova-cell1-3290-account-create-update-rg9fw\" (UID: \"ada73f5c-ac32-44b6-9af8-dbc560004935\") " pod="openstack/nova-cell1-3290-account-create-update-rg9fw" Feb 18 12:08:39 crc kubenswrapper[4717]: I0218 12:08:39.003110 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjtpw\" (UniqueName: \"kubernetes.io/projected/ada73f5c-ac32-44b6-9af8-dbc560004935-kube-api-access-kjtpw\") pod \"nova-cell1-3290-account-create-update-rg9fw\" (UID: \"ada73f5c-ac32-44b6-9af8-dbc560004935\") " pod="openstack/nova-cell1-3290-account-create-update-rg9fw" Feb 18 12:08:39 crc kubenswrapper[4717]: I0218 12:08:39.149284 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7188-account-create-update-vmgn4" Feb 18 12:08:39 crc kubenswrapper[4717]: I0218 12:08:39.184917 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3290-account-create-update-rg9fw" Feb 18 12:08:39 crc kubenswrapper[4717]: I0218 12:08:39.245330 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6jnqg"] Feb 18 12:08:39 crc kubenswrapper[4717]: I0218 12:08:39.377068 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-j4pk2"] Feb 18 12:08:39 crc kubenswrapper[4717]: I0218 12:08:39.473488 4717 generic.go:334] "Generic (PLEG): container finished" podID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerID="294b65c15020b33e8d2cf5ff3aca9a2afd10a3361f4895097a15d03754fe4a74" exitCode=0 Feb 18 12:08:39 crc kubenswrapper[4717]: I0218 12:08:39.473530 4717 generic.go:334] "Generic (PLEG): container finished" podID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerID="bf272b7d700b67b4aaee4a5c9023ff30a80c6638ef74fa91d844026384be7b08" exitCode=2 Feb 18 12:08:39 crc kubenswrapper[4717]: I0218 12:08:39.473595 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79","Type":"ContainerDied","Data":"294b65c15020b33e8d2cf5ff3aca9a2afd10a3361f4895097a15d03754fe4a74"} Feb 18 12:08:39 crc kubenswrapper[4717]: I0218 12:08:39.473636 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79","Type":"ContainerDied","Data":"bf272b7d700b67b4aaee4a5c9023ff30a80c6638ef74fa91d844026384be7b08"} Feb 18 12:08:39 crc kubenswrapper[4717]: I0218 12:08:39.474962 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j4pk2" event={"ID":"fe18f0e8-9d43-49ba-afd5-854b7540e855","Type":"ContainerStarted","Data":"3b1d30ad25cc99ea55a74d7cfde2c9177738b8f5309c1cbfcb6c47afb28106d6"} Feb 18 12:08:39 crc kubenswrapper[4717]: I0218 12:08:39.476329 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-z5gs7"] Feb 18 12:08:39 
crc kubenswrapper[4717]: I0218 12:08:39.477220 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6jnqg" event={"ID":"e12eeb3e-8069-404c-ab57-9a182bd555e4","Type":"ContainerStarted","Data":"4cedec490ca90e4708861c29a4c62d4983a78c533b7891870d5f9867f8f04da6"} Feb 18 12:08:39 crc kubenswrapper[4717]: W0218 12:08:39.479179 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3ff3a55_d178_47f2_9d17_069494943080.slice/crio-9ecdb6151fd324d4cc03a95831fd9b41249f15ad2d65988f3dbb873e26940c2a WatchSource:0}: Error finding container 9ecdb6151fd324d4cc03a95831fd9b41249f15ad2d65988f3dbb873e26940c2a: Status 404 returned error can't find the container with id 9ecdb6151fd324d4cc03a95831fd9b41249f15ad2d65988f3dbb873e26940c2a Feb 18 12:08:39 crc kubenswrapper[4717]: I0218 12:08:39.559029 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0466-account-create-update-m65cg"] Feb 18 12:08:39 crc kubenswrapper[4717]: I0218 12:08:39.767517 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7188-account-create-update-vmgn4"] Feb 18 12:08:39 crc kubenswrapper[4717]: I0218 12:08:39.890752 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3290-account-create-update-rg9fw"] Feb 18 12:08:39 crc kubenswrapper[4717]: W0218 12:08:39.946574 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podada73f5c_ac32_44b6_9af8_dbc560004935.slice/crio-4ef9d1f34634cf60816e62d9ff2b56b54e6928358ff6c9180abd116ab4c76889 WatchSource:0}: Error finding container 4ef9d1f34634cf60816e62d9ff2b56b54e6928358ff6c9180abd116ab4c76889: Status 404 returned error can't find the container with id 4ef9d1f34634cf60816e62d9ff2b56b54e6928358ff6c9180abd116ab4c76889 Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.501325 4717 generic.go:334] 
"Generic (PLEG): container finished" podID="fe18f0e8-9d43-49ba-afd5-854b7540e855" containerID="0ac44b0853d0b61a571326e4b6962a5424cf1f0221e07e34c5ef08d862e2d3ba" exitCode=0 Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.501429 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j4pk2" event={"ID":"fe18f0e8-9d43-49ba-afd5-854b7540e855","Type":"ContainerDied","Data":"0ac44b0853d0b61a571326e4b6962a5424cf1f0221e07e34c5ef08d862e2d3ba"} Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.508106 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3290-account-create-update-rg9fw" event={"ID":"ada73f5c-ac32-44b6-9af8-dbc560004935","Type":"ContainerStarted","Data":"a7ac6d63963063aa0411fb3ea96ea0b7e05f9efb8616fd62be45aed561c4b6e6"} Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.508162 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3290-account-create-update-rg9fw" event={"ID":"ada73f5c-ac32-44b6-9af8-dbc560004935","Type":"ContainerStarted","Data":"4ef9d1f34634cf60816e62d9ff2b56b54e6928358ff6c9180abd116ab4c76889"} Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.512785 4717 generic.go:334] "Generic (PLEG): container finished" podID="e12eeb3e-8069-404c-ab57-9a182bd555e4" containerID="1cf9fbbd529a32a4ca3ab77705d6f29f37d26bf9aa75c488d20b9c5693ed36e2" exitCode=0 Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.512880 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6jnqg" event={"ID":"e12eeb3e-8069-404c-ab57-9a182bd555e4","Type":"ContainerDied","Data":"1cf9fbbd529a32a4ca3ab77705d6f29f37d26bf9aa75c488d20b9c5693ed36e2"} Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.516448 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7188-account-create-update-vmgn4" 
event={"ID":"a354e546-d49a-4825-a255-ce8888c40e42","Type":"ContainerStarted","Data":"b737f44033956b1ef5d694279c6bdc7ff4ae897d5b739497b5dde5f1166cca7c"} Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.516499 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7188-account-create-update-vmgn4" event={"ID":"a354e546-d49a-4825-a255-ce8888c40e42","Type":"ContainerStarted","Data":"4d9afa8dbc2cdd2ea5f416e9686e07ba92c0e3069d9947e3c0f92a71eb6f0980"} Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.524803 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0466-account-create-update-m65cg" event={"ID":"0bb8ac16-ce77-4ce1-badf-2d4d610757f3","Type":"ContainerStarted","Data":"8268418fc337d5566944009346998b60339d7fa249c2abf824b7d0733197da11"} Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.524868 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0466-account-create-update-m65cg" event={"ID":"0bb8ac16-ce77-4ce1-badf-2d4d610757f3","Type":"ContainerStarted","Data":"7a4e01442f642b0147f33fb7329527a803c32ffe767150cd82012b8a4514b946"} Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.530531 4717 generic.go:334] "Generic (PLEG): container finished" podID="a3ff3a55-d178-47f2-9d17-069494943080" containerID="be01a91fc59414df545e3b7a7eb41d892e94a99b3d7dc4c5d68a74fdacf07184" exitCode=0 Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.530665 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z5gs7" event={"ID":"a3ff3a55-d178-47f2-9d17-069494943080","Type":"ContainerDied","Data":"be01a91fc59414df545e3b7a7eb41d892e94a99b3d7dc4c5d68a74fdacf07184"} Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.530696 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z5gs7" 
event={"ID":"a3ff3a55-d178-47f2-9d17-069494943080","Type":"ContainerStarted","Data":"9ecdb6151fd324d4cc03a95831fd9b41249f15ad2d65988f3dbb873e26940c2a"} Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.539407 4717 generic.go:334] "Generic (PLEG): container finished" podID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerID="d92441fe166b80488e25dbdae97823d2d7883423a0850e25c9bf1fed4380680a" exitCode=0 Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.539460 4717 generic.go:334] "Generic (PLEG): container finished" podID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerID="5b31aed126f443b53172f5bb563956425d6990d363118c11b3f0d55ec5081091" exitCode=0 Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.539501 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79","Type":"ContainerDied","Data":"d92441fe166b80488e25dbdae97823d2d7883423a0850e25c9bf1fed4380680a"} Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.539543 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79","Type":"ContainerDied","Data":"5b31aed126f443b53172f5bb563956425d6990d363118c11b3f0d55ec5081091"} Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.567658 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-3290-account-create-update-rg9fw" podStartSLOduration=2.567630403 podStartE2EDuration="2.567630403s" podCreationTimestamp="2026-02-18 12:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:08:40.546357005 +0000 UTC m=+1154.948458321" watchObservedRunningTime="2026-02-18 12:08:40.567630403 +0000 UTC m=+1154.969731719" Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.585491 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-7188-account-create-update-vmgn4" podStartSLOduration=2.585466692 podStartE2EDuration="2.585466692s" podCreationTimestamp="2026-02-18 12:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:08:40.573630348 +0000 UTC m=+1154.975731674" watchObservedRunningTime="2026-02-18 12:08:40.585466692 +0000 UTC m=+1154.987568008" Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.669195 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0466-account-create-update-m65cg" podStartSLOduration=2.669167703 podStartE2EDuration="2.669167703s" podCreationTimestamp="2026-02-18 12:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:08:40.656066843 +0000 UTC m=+1155.058168159" watchObservedRunningTime="2026-02-18 12:08:40.669167703 +0000 UTC m=+1155.071269019" Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.929051 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.956081 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-config-data\") pod \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.956361 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kswl\" (UniqueName: \"kubernetes.io/projected/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-kube-api-access-7kswl\") pod \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.956556 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-scripts\") pod \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.956666 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-log-httpd\") pod \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.956707 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-run-httpd\") pod \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.956730 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-sg-core-conf-yaml\") pod \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.956807 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-combined-ca-bundle\") pod \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\" (UID: \"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79\") " Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.957516 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" (UID: "ceabfb4a-86c0-4bb9-952c-5b7193cc6b79"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.957563 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" (UID: "ceabfb4a-86c0-4bb9-952c-5b7193cc6b79"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.958749 4717 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.958805 4717 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.969828 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-kube-api-access-7kswl" (OuterVolumeSpecName: "kube-api-access-7kswl") pod "ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" (UID: "ceabfb4a-86c0-4bb9-952c-5b7193cc6b79"). InnerVolumeSpecName "kube-api-access-7kswl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:40 crc kubenswrapper[4717]: I0218 12:08:40.993118 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-scripts" (OuterVolumeSpecName: "scripts") pod "ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" (UID: "ceabfb4a-86c0-4bb9-952c-5b7193cc6b79"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.031453 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" (UID: "ceabfb4a-86c0-4bb9-952c-5b7193cc6b79"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.077880 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.078134 4717 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.078274 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kswl\" (UniqueName: \"kubernetes.io/projected/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-kube-api-access-7kswl\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.097362 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-config-data" (OuterVolumeSpecName: "config-data") pod "ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" (UID: "ceabfb4a-86c0-4bb9-952c-5b7193cc6b79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.107915 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" (UID: "ceabfb4a-86c0-4bb9-952c-5b7193cc6b79"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.180095 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.180146 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.549815 4717 generic.go:334] "Generic (PLEG): container finished" podID="a354e546-d49a-4825-a255-ce8888c40e42" containerID="b737f44033956b1ef5d694279c6bdc7ff4ae897d5b739497b5dde5f1166cca7c" exitCode=0 Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.549945 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7188-account-create-update-vmgn4" event={"ID":"a354e546-d49a-4825-a255-ce8888c40e42","Type":"ContainerDied","Data":"b737f44033956b1ef5d694279c6bdc7ff4ae897d5b739497b5dde5f1166cca7c"} Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.555191 4717 generic.go:334] "Generic (PLEG): container finished" podID="0bb8ac16-ce77-4ce1-badf-2d4d610757f3" containerID="8268418fc337d5566944009346998b60339d7fa249c2abf824b7d0733197da11" exitCode=0 Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.555397 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0466-account-create-update-m65cg" event={"ID":"0bb8ac16-ce77-4ce1-badf-2d4d610757f3","Type":"ContainerDied","Data":"8268418fc337d5566944009346998b60339d7fa249c2abf824b7d0733197da11"} Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.558771 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ceabfb4a-86c0-4bb9-952c-5b7193cc6b79","Type":"ContainerDied","Data":"8b9e8bdb70c0209cbc87b117fd02e82087cb33ef5d19e8eff23837d01ccfbcef"} Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.558830 4717 scope.go:117] "RemoveContainer" containerID="294b65c15020b33e8d2cf5ff3aca9a2afd10a3361f4895097a15d03754fe4a74" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.558911 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.561798 4717 generic.go:334] "Generic (PLEG): container finished" podID="ada73f5c-ac32-44b6-9af8-dbc560004935" containerID="a7ac6d63963063aa0411fb3ea96ea0b7e05f9efb8616fd62be45aed561c4b6e6" exitCode=0 Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.562241 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3290-account-create-update-rg9fw" event={"ID":"ada73f5c-ac32-44b6-9af8-dbc560004935","Type":"ContainerDied","Data":"a7ac6d63963063aa0411fb3ea96ea0b7e05f9efb8616fd62be45aed561c4b6e6"} Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.599253 4717 scope.go:117] "RemoveContainer" containerID="bf272b7d700b67b4aaee4a5c9023ff30a80c6638ef74fa91d844026384be7b08" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.680741 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.690800 4717 scope.go:117] "RemoveContainer" containerID="d92441fe166b80488e25dbdae97823d2d7883423a0850e25c9bf1fed4380680a" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.702219 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.730200 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:41 crc kubenswrapper[4717]: E0218 12:08:41.730870 4717 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerName="proxy-httpd" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.730899 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerName="proxy-httpd" Feb 18 12:08:41 crc kubenswrapper[4717]: E0218 12:08:41.730917 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerName="sg-core" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.730926 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerName="sg-core" Feb 18 12:08:41 crc kubenswrapper[4717]: E0218 12:08:41.730963 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerName="ceilometer-notification-agent" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.730971 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerName="ceilometer-notification-agent" Feb 18 12:08:41 crc kubenswrapper[4717]: E0218 12:08:41.730994 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerName="ceilometer-central-agent" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.731003 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerName="ceilometer-central-agent" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.731244 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerName="proxy-httpd" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.731306 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerName="ceilometer-central-agent" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.731345 4717 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerName="ceilometer-notification-agent" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.731367 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" containerName="sg-core" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.733715 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.740059 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.742335 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.752663 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.794228 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.794792 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-run-httpd\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.800863 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-scripts\") pod \"ceilometer-0\" (UID: 
\"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.800976 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.801219 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-config-data\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.801334 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd9ch\" (UniqueName: \"kubernetes.io/projected/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-kube-api-access-pd9ch\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.801849 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-log-httpd\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.803456 4717 scope.go:117] "RemoveContainer" containerID="5b31aed126f443b53172f5bb563956425d6990d363118c11b3f0d55ec5081091" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.904567 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-config-data\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.904634 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd9ch\" (UniqueName: \"kubernetes.io/projected/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-kube-api-access-pd9ch\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.904697 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-log-httpd\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.904753 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.904787 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-run-httpd\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.904816 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-scripts\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 
12:08:41.904836 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.905990 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-run-httpd\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.908693 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-log-httpd\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.915769 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.915980 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-config-data\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.918648 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-scripts\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " 
pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.920117 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:41 crc kubenswrapper[4717]: I0218 12:08:41.932063 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd9ch\" (UniqueName: \"kubernetes.io/projected/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-kube-api-access-pd9ch\") pod \"ceilometer-0\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " pod="openstack/ceilometer-0" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.074847 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.190035 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6jnqg" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.205799 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j4pk2" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.210860 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-z5gs7" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.220619 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rztr8\" (UniqueName: \"kubernetes.io/projected/e12eeb3e-8069-404c-ab57-9a182bd555e4-kube-api-access-rztr8\") pod \"e12eeb3e-8069-404c-ab57-9a182bd555e4\" (UID: \"e12eeb3e-8069-404c-ab57-9a182bd555e4\") " Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.220752 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e12eeb3e-8069-404c-ab57-9a182bd555e4-operator-scripts\") pod \"e12eeb3e-8069-404c-ab57-9a182bd555e4\" (UID: \"e12eeb3e-8069-404c-ab57-9a182bd555e4\") " Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.221937 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e12eeb3e-8069-404c-ab57-9a182bd555e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e12eeb3e-8069-404c-ab57-9a182bd555e4" (UID: "e12eeb3e-8069-404c-ab57-9a182bd555e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.241621 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e12eeb3e-8069-404c-ab57-9a182bd555e4-kube-api-access-rztr8" (OuterVolumeSpecName: "kube-api-access-rztr8") pod "e12eeb3e-8069-404c-ab57-9a182bd555e4" (UID: "e12eeb3e-8069-404c-ab57-9a182bd555e4"). InnerVolumeSpecName "kube-api-access-rztr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.323208 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27f7m\" (UniqueName: \"kubernetes.io/projected/fe18f0e8-9d43-49ba-afd5-854b7540e855-kube-api-access-27f7m\") pod \"fe18f0e8-9d43-49ba-afd5-854b7540e855\" (UID: \"fe18f0e8-9d43-49ba-afd5-854b7540e855\") " Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.323458 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe18f0e8-9d43-49ba-afd5-854b7540e855-operator-scripts\") pod \"fe18f0e8-9d43-49ba-afd5-854b7540e855\" (UID: \"fe18f0e8-9d43-49ba-afd5-854b7540e855\") " Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.323510 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwrrk\" (UniqueName: \"kubernetes.io/projected/a3ff3a55-d178-47f2-9d17-069494943080-kube-api-access-dwrrk\") pod \"a3ff3a55-d178-47f2-9d17-069494943080\" (UID: \"a3ff3a55-d178-47f2-9d17-069494943080\") " Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.323532 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3ff3a55-d178-47f2-9d17-069494943080-operator-scripts\") pod \"a3ff3a55-d178-47f2-9d17-069494943080\" (UID: \"a3ff3a55-d178-47f2-9d17-069494943080\") " Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.325286 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ff3a55-d178-47f2-9d17-069494943080-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a3ff3a55-d178-47f2-9d17-069494943080" (UID: "a3ff3a55-d178-47f2-9d17-069494943080"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.325452 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe18f0e8-9d43-49ba-afd5-854b7540e855-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe18f0e8-9d43-49ba-afd5-854b7540e855" (UID: "fe18f0e8-9d43-49ba-afd5-854b7540e855"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.326965 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e12eeb3e-8069-404c-ab57-9a182bd555e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.326989 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe18f0e8-9d43-49ba-afd5-854b7540e855-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.327001 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3ff3a55-d178-47f2-9d17-069494943080-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.327013 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rztr8\" (UniqueName: \"kubernetes.io/projected/e12eeb3e-8069-404c-ab57-9a182bd555e4-kube-api-access-rztr8\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.330663 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3ff3a55-d178-47f2-9d17-069494943080-kube-api-access-dwrrk" (OuterVolumeSpecName: "kube-api-access-dwrrk") pod "a3ff3a55-d178-47f2-9d17-069494943080" (UID: "a3ff3a55-d178-47f2-9d17-069494943080"). 
InnerVolumeSpecName "kube-api-access-dwrrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.330739 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe18f0e8-9d43-49ba-afd5-854b7540e855-kube-api-access-27f7m" (OuterVolumeSpecName: "kube-api-access-27f7m") pod "fe18f0e8-9d43-49ba-afd5-854b7540e855" (UID: "fe18f0e8-9d43-49ba-afd5-854b7540e855"). InnerVolumeSpecName "kube-api-access-27f7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.429352 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwrrk\" (UniqueName: \"kubernetes.io/projected/a3ff3a55-d178-47f2-9d17-069494943080-kube-api-access-dwrrk\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.429392 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27f7m\" (UniqueName: \"kubernetes.io/projected/fe18f0e8-9d43-49ba-afd5-854b7540e855-kube-api-access-27f7m\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.575878 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6jnqg" event={"ID":"e12eeb3e-8069-404c-ab57-9a182bd555e4","Type":"ContainerDied","Data":"4cedec490ca90e4708861c29a4c62d4983a78c533b7891870d5f9867f8f04da6"} Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.576325 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cedec490ca90e4708861c29a4c62d4983a78c533b7891870d5f9867f8f04da6" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.576186 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6jnqg" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.578333 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z5gs7" event={"ID":"a3ff3a55-d178-47f2-9d17-069494943080","Type":"ContainerDied","Data":"9ecdb6151fd324d4cc03a95831fd9b41249f15ad2d65988f3dbb873e26940c2a"} Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.578384 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ecdb6151fd324d4cc03a95831fd9b41249f15ad2d65988f3dbb873e26940c2a" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.578437 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z5gs7" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.589858 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j4pk2" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.589960 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j4pk2" event={"ID":"fe18f0e8-9d43-49ba-afd5-854b7540e855","Type":"ContainerDied","Data":"3b1d30ad25cc99ea55a74d7cfde2c9177738b8f5309c1cbfcb6c47afb28106d6"} Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.590007 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b1d30ad25cc99ea55a74d7cfde2c9177738b8f5309c1cbfcb6c47afb28106d6" Feb 18 12:08:42 crc kubenswrapper[4717]: I0218 12:08:42.618958 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.087357 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3290-account-create-update-rg9fw" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.093192 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceabfb4a-86c0-4bb9-952c-5b7193cc6b79" path="/var/lib/kubelet/pods/ceabfb4a-86c0-4bb9-952c-5b7193cc6b79/volumes" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.205450 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.226860 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7188-account-create-update-vmgn4" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.264462 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ada73f5c-ac32-44b6-9af8-dbc560004935-operator-scripts\") pod \"ada73f5c-ac32-44b6-9af8-dbc560004935\" (UID: \"ada73f5c-ac32-44b6-9af8-dbc560004935\") " Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.264551 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27v4q\" (UniqueName: \"kubernetes.io/projected/a354e546-d49a-4825-a255-ce8888c40e42-kube-api-access-27v4q\") pod \"a354e546-d49a-4825-a255-ce8888c40e42\" (UID: \"a354e546-d49a-4825-a255-ce8888c40e42\") " Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.264606 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a354e546-d49a-4825-a255-ce8888c40e42-operator-scripts\") pod \"a354e546-d49a-4825-a255-ce8888c40e42\" (UID: \"a354e546-d49a-4825-a255-ce8888c40e42\") " Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.264628 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjtpw\" (UniqueName: 
\"kubernetes.io/projected/ada73f5c-ac32-44b6-9af8-dbc560004935-kube-api-access-kjtpw\") pod \"ada73f5c-ac32-44b6-9af8-dbc560004935\" (UID: \"ada73f5c-ac32-44b6-9af8-dbc560004935\") " Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.273177 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0466-account-create-update-m65cg" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.274049 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ada73f5c-ac32-44b6-9af8-dbc560004935-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ada73f5c-ac32-44b6-9af8-dbc560004935" (UID: "ada73f5c-ac32-44b6-9af8-dbc560004935"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.275821 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a354e546-d49a-4825-a255-ce8888c40e42-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a354e546-d49a-4825-a255-ce8888c40e42" (UID: "a354e546-d49a-4825-a255-ce8888c40e42"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.279002 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ada73f5c-ac32-44b6-9af8-dbc560004935-kube-api-access-kjtpw" (OuterVolumeSpecName: "kube-api-access-kjtpw") pod "ada73f5c-ac32-44b6-9af8-dbc560004935" (UID: "ada73f5c-ac32-44b6-9af8-dbc560004935"). InnerVolumeSpecName "kube-api-access-kjtpw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.288373 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a354e546-d49a-4825-a255-ce8888c40e42-kube-api-access-27v4q" (OuterVolumeSpecName: "kube-api-access-27v4q") pod "a354e546-d49a-4825-a255-ce8888c40e42" (UID: "a354e546-d49a-4825-a255-ce8888c40e42"). InnerVolumeSpecName "kube-api-access-27v4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.366939 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ada73f5c-ac32-44b6-9af8-dbc560004935-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.366993 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27v4q\" (UniqueName: \"kubernetes.io/projected/a354e546-d49a-4825-a255-ce8888c40e42-kube-api-access-27v4q\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.367012 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a354e546-d49a-4825-a255-ce8888c40e42-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.367024 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjtpw\" (UniqueName: \"kubernetes.io/projected/ada73f5c-ac32-44b6-9af8-dbc560004935-kube-api-access-kjtpw\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.468930 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pltt2\" (UniqueName: \"kubernetes.io/projected/0bb8ac16-ce77-4ce1-badf-2d4d610757f3-kube-api-access-pltt2\") pod \"0bb8ac16-ce77-4ce1-badf-2d4d610757f3\" (UID: \"0bb8ac16-ce77-4ce1-badf-2d4d610757f3\") " Feb 
18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.469029 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bb8ac16-ce77-4ce1-badf-2d4d610757f3-operator-scripts\") pod \"0bb8ac16-ce77-4ce1-badf-2d4d610757f3\" (UID: \"0bb8ac16-ce77-4ce1-badf-2d4d610757f3\") " Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.470231 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb8ac16-ce77-4ce1-badf-2d4d610757f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bb8ac16-ce77-4ce1-badf-2d4d610757f3" (UID: "0bb8ac16-ce77-4ce1-badf-2d4d610757f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.475566 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb8ac16-ce77-4ce1-badf-2d4d610757f3-kube-api-access-pltt2" (OuterVolumeSpecName: "kube-api-access-pltt2") pod "0bb8ac16-ce77-4ce1-badf-2d4d610757f3" (UID: "0bb8ac16-ce77-4ce1-badf-2d4d610757f3"). InnerVolumeSpecName "kube-api-access-pltt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.571678 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pltt2\" (UniqueName: \"kubernetes.io/projected/0bb8ac16-ce77-4ce1-badf-2d4d610757f3-kube-api-access-pltt2\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.572174 4717 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bb8ac16-ce77-4ce1-badf-2d4d610757f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.604456 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3290-account-create-update-rg9fw" event={"ID":"ada73f5c-ac32-44b6-9af8-dbc560004935","Type":"ContainerDied","Data":"4ef9d1f34634cf60816e62d9ff2b56b54e6928358ff6c9180abd116ab4c76889"} Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.604521 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ef9d1f34634cf60816e62d9ff2b56b54e6928358ff6c9180abd116ab4c76889" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.604484 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3290-account-create-update-rg9fw" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.606819 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4","Type":"ContainerStarted","Data":"4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421"} Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.606917 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4","Type":"ContainerStarted","Data":"e595b186e1501ae8e160677ff9fe2e710cda9cce3c9bad597957b47daf2dca62"} Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.609229 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7188-account-create-update-vmgn4" event={"ID":"a354e546-d49a-4825-a255-ce8888c40e42","Type":"ContainerDied","Data":"4d9afa8dbc2cdd2ea5f416e9686e07ba92c0e3069d9947e3c0f92a71eb6f0980"} Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.609270 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7188-account-create-update-vmgn4" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.609305 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d9afa8dbc2cdd2ea5f416e9686e07ba92c0e3069d9947e3c0f92a71eb6f0980" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.611290 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0466-account-create-update-m65cg" event={"ID":"0bb8ac16-ce77-4ce1-badf-2d4d610757f3","Type":"ContainerDied","Data":"7a4e01442f642b0147f33fb7329527a803c32ffe767150cd82012b8a4514b946"} Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.611333 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a4e01442f642b0147f33fb7329527a803c32ffe767150cd82012b8a4514b946" Feb 18 12:08:43 crc kubenswrapper[4717]: I0218 12:08:43.611385 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0466-account-create-update-m65cg" Feb 18 12:08:44 crc kubenswrapper[4717]: I0218 12:08:44.627578 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4","Type":"ContainerStarted","Data":"81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c"} Feb 18 12:08:45 crc kubenswrapper[4717]: I0218 12:08:45.616445 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:45 crc kubenswrapper[4717]: I0218 12:08:45.661648 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4","Type":"ContainerStarted","Data":"d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12"} Feb 18 12:08:45 crc kubenswrapper[4717]: I0218 12:08:45.768589 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/placement-765f565894-lj9d4" Feb 18 12:08:45 crc kubenswrapper[4717]: I0218 12:08:45.860675 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5f47d9d48-pwkwm"] Feb 18 12:08:45 crc kubenswrapper[4717]: I0218 12:08:45.861005 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5f47d9d48-pwkwm" podUID="49858184-0ffa-49c8-8fd5-5e3935eb70f9" containerName="placement-log" containerID="cri-o://c48ad9ae248465e8c685562c251aeb10dac9e20863c4e020b8deef8309feeccd" gracePeriod=30 Feb 18 12:08:45 crc kubenswrapper[4717]: I0218 12:08:45.861208 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5f47d9d48-pwkwm" podUID="49858184-0ffa-49c8-8fd5-5e3935eb70f9" containerName="placement-api" containerID="cri-o://f558af2f95131207fa7bd2d2215c90799ff2dae2975235e102954c6f59fe4b36" gracePeriod=30 Feb 18 12:08:46 crc kubenswrapper[4717]: I0218 12:08:46.674085 4717 generic.go:334] "Generic (PLEG): container finished" podID="49858184-0ffa-49c8-8fd5-5e3935eb70f9" containerID="c48ad9ae248465e8c685562c251aeb10dac9e20863c4e020b8deef8309feeccd" exitCode=143 Feb 18 12:08:46 crc kubenswrapper[4717]: I0218 12:08:46.674185 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f47d9d48-pwkwm" event={"ID":"49858184-0ffa-49c8-8fd5-5e3935eb70f9","Type":"ContainerDied","Data":"c48ad9ae248465e8c685562c251aeb10dac9e20863c4e020b8deef8309feeccd"} Feb 18 12:08:47 crc kubenswrapper[4717]: I0218 12:08:47.686717 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4","Type":"ContainerStarted","Data":"dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6"} Feb 18 12:08:47 crc kubenswrapper[4717]: I0218 12:08:47.689566 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 12:08:47 crc kubenswrapper[4717]: I0218 
12:08:47.721721 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.31221761 podStartE2EDuration="6.721695056s" podCreationTimestamp="2026-02-18 12:08:41 +0000 UTC" firstStartedPulling="2026-02-18 12:08:42.603902471 +0000 UTC m=+1157.006003787" lastFinishedPulling="2026-02-18 12:08:47.013379927 +0000 UTC m=+1161.415481233" observedRunningTime="2026-02-18 12:08:47.712724315 +0000 UTC m=+1162.114825631" watchObservedRunningTime="2026-02-18 12:08:47.721695056 +0000 UTC m=+1162.123796372" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.878339 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kk96s"] Feb 18 12:08:48 crc kubenswrapper[4717]: E0218 12:08:48.879372 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb8ac16-ce77-4ce1-badf-2d4d610757f3" containerName="mariadb-account-create-update" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.879389 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb8ac16-ce77-4ce1-badf-2d4d610757f3" containerName="mariadb-account-create-update" Feb 18 12:08:48 crc kubenswrapper[4717]: E0218 12:08:48.879408 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ff3a55-d178-47f2-9d17-069494943080" containerName="mariadb-database-create" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.879414 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ff3a55-d178-47f2-9d17-069494943080" containerName="mariadb-database-create" Feb 18 12:08:48 crc kubenswrapper[4717]: E0218 12:08:48.879427 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12eeb3e-8069-404c-ab57-9a182bd555e4" containerName="mariadb-database-create" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.879434 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12eeb3e-8069-404c-ab57-9a182bd555e4" containerName="mariadb-database-create" Feb 18 12:08:48 crc 
kubenswrapper[4717]: E0218 12:08:48.879463 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe18f0e8-9d43-49ba-afd5-854b7540e855" containerName="mariadb-database-create" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.879488 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe18f0e8-9d43-49ba-afd5-854b7540e855" containerName="mariadb-database-create" Feb 18 12:08:48 crc kubenswrapper[4717]: E0218 12:08:48.879496 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a354e546-d49a-4825-a255-ce8888c40e42" containerName="mariadb-account-create-update" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.879503 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="a354e546-d49a-4825-a255-ce8888c40e42" containerName="mariadb-account-create-update" Feb 18 12:08:48 crc kubenswrapper[4717]: E0218 12:08:48.879538 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ada73f5c-ac32-44b6-9af8-dbc560004935" containerName="mariadb-account-create-update" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.879545 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ada73f5c-ac32-44b6-9af8-dbc560004935" containerName="mariadb-account-create-update" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.879744 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb8ac16-ce77-4ce1-badf-2d4d610757f3" containerName="mariadb-account-create-update" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.879764 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ff3a55-d178-47f2-9d17-069494943080" containerName="mariadb-database-create" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.879777 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="a354e546-d49a-4825-a255-ce8888c40e42" containerName="mariadb-account-create-update" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.879793 4717 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="e12eeb3e-8069-404c-ab57-9a182bd555e4" containerName="mariadb-database-create" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.879804 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ada73f5c-ac32-44b6-9af8-dbc560004935" containerName="mariadb-account-create-update" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.879812 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe18f0e8-9d43-49ba-afd5-854b7540e855" containerName="mariadb-database-create" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.880566 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kk96s" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.883091 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.883188 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nbvvt" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.884054 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 18 12:08:48 crc kubenswrapper[4717]: I0218 12:08:48.901063 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kk96s"] Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.035132 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlnvd\" (UniqueName: \"kubernetes.io/projected/ba2e1766-bfe0-4a06-bb70-833b33300ec4-kube-api-access-qlnvd\") pod \"nova-cell0-conductor-db-sync-kk96s\" (UID: \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\") " pod="openstack/nova-cell0-conductor-db-sync-kk96s" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.035298 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-scripts\") pod \"nova-cell0-conductor-db-sync-kk96s\" (UID: \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\") " pod="openstack/nova-cell0-conductor-db-sync-kk96s" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.035539 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-config-data\") pod \"nova-cell0-conductor-db-sync-kk96s\" (UID: \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\") " pod="openstack/nova-cell0-conductor-db-sync-kk96s" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.035630 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kk96s\" (UID: \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\") " pod="openstack/nova-cell0-conductor-db-sync-kk96s" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.138401 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-config-data\") pod \"nova-cell0-conductor-db-sync-kk96s\" (UID: \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\") " pod="openstack/nova-cell0-conductor-db-sync-kk96s" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.139009 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kk96s\" (UID: \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\") " pod="openstack/nova-cell0-conductor-db-sync-kk96s" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.139738 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qlnvd\" (UniqueName: \"kubernetes.io/projected/ba2e1766-bfe0-4a06-bb70-833b33300ec4-kube-api-access-qlnvd\") pod \"nova-cell0-conductor-db-sync-kk96s\" (UID: \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\") " pod="openstack/nova-cell0-conductor-db-sync-kk96s" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.139795 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-scripts\") pod \"nova-cell0-conductor-db-sync-kk96s\" (UID: \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\") " pod="openstack/nova-cell0-conductor-db-sync-kk96s" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.150516 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-scripts\") pod \"nova-cell0-conductor-db-sync-kk96s\" (UID: \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\") " pod="openstack/nova-cell0-conductor-db-sync-kk96s" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.151144 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-config-data\") pod \"nova-cell0-conductor-db-sync-kk96s\" (UID: \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\") " pod="openstack/nova-cell0-conductor-db-sync-kk96s" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.163091 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kk96s\" (UID: \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\") " pod="openstack/nova-cell0-conductor-db-sync-kk96s" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.163185 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qlnvd\" (UniqueName: \"kubernetes.io/projected/ba2e1766-bfe0-4a06-bb70-833b33300ec4-kube-api-access-qlnvd\") pod \"nova-cell0-conductor-db-sync-kk96s\" (UID: \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\") " pod="openstack/nova-cell0-conductor-db-sync-kk96s" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.200102 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kk96s" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.508288 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.566680 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jp99\" (UniqueName: \"kubernetes.io/projected/49858184-0ffa-49c8-8fd5-5e3935eb70f9-kube-api-access-4jp99\") pod \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.566740 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-combined-ca-bundle\") pod \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.566766 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49858184-0ffa-49c8-8fd5-5e3935eb70f9-logs\") pod \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.566801 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-internal-tls-certs\") pod 
\"49858184-0ffa-49c8-8fd5-5e3935eb70f9\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.570345 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49858184-0ffa-49c8-8fd5-5e3935eb70f9-logs" (OuterVolumeSpecName: "logs") pod "49858184-0ffa-49c8-8fd5-5e3935eb70f9" (UID: "49858184-0ffa-49c8-8fd5-5e3935eb70f9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.602135 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49858184-0ffa-49c8-8fd5-5e3935eb70f9-kube-api-access-4jp99" (OuterVolumeSpecName: "kube-api-access-4jp99") pod "49858184-0ffa-49c8-8fd5-5e3935eb70f9" (UID: "49858184-0ffa-49c8-8fd5-5e3935eb70f9"). InnerVolumeSpecName "kube-api-access-4jp99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.617507 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kk96s"] Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.668973 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-scripts\") pod \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.669046 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-public-tls-certs\") pod \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.669079 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-config-data\") pod \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\" (UID: \"49858184-0ffa-49c8-8fd5-5e3935eb70f9\") " Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.669635 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jp99\" (UniqueName: \"kubernetes.io/projected/49858184-0ffa-49c8-8fd5-5e3935eb70f9-kube-api-access-4jp99\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.669656 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49858184-0ffa-49c8-8fd5-5e3935eb70f9-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.672890 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49858184-0ffa-49c8-8fd5-5e3935eb70f9" (UID: "49858184-0ffa-49c8-8fd5-5e3935eb70f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.681437 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-scripts" (OuterVolumeSpecName: "scripts") pod "49858184-0ffa-49c8-8fd5-5e3935eb70f9" (UID: "49858184-0ffa-49c8-8fd5-5e3935eb70f9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.713514 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kk96s" event={"ID":"ba2e1766-bfe0-4a06-bb70-833b33300ec4","Type":"ContainerStarted","Data":"9b949dd61d98c2e31b5399023c69bf4ecbe828d04a4cd84e42c71c8d684c18ff"} Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.715533 4717 generic.go:334] "Generic (PLEG): container finished" podID="49858184-0ffa-49c8-8fd5-5e3935eb70f9" containerID="f558af2f95131207fa7bd2d2215c90799ff2dae2975235e102954c6f59fe4b36" exitCode=0 Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.717084 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f47d9d48-pwkwm" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.718111 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f47d9d48-pwkwm" event={"ID":"49858184-0ffa-49c8-8fd5-5e3935eb70f9","Type":"ContainerDied","Data":"f558af2f95131207fa7bd2d2215c90799ff2dae2975235e102954c6f59fe4b36"} Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.718216 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f47d9d48-pwkwm" event={"ID":"49858184-0ffa-49c8-8fd5-5e3935eb70f9","Type":"ContainerDied","Data":"97895495b89f509f05e8c150508eff87181e286f1bba433376d5943acd952f01"} Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.718236 4717 scope.go:117] "RemoveContainer" containerID="f558af2f95131207fa7bd2d2215c90799ff2dae2975235e102954c6f59fe4b36" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.732628 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "49858184-0ffa-49c8-8fd5-5e3935eb70f9" (UID: "49858184-0ffa-49c8-8fd5-5e3935eb70f9"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.750496 4717 scope.go:117] "RemoveContainer" containerID="c48ad9ae248465e8c685562c251aeb10dac9e20863c4e020b8deef8309feeccd" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.756912 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-config-data" (OuterVolumeSpecName: "config-data") pod "49858184-0ffa-49c8-8fd5-5e3935eb70f9" (UID: "49858184-0ffa-49c8-8fd5-5e3935eb70f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.771384 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.771496 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.771566 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.771588 4717 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.773346 4717 scope.go:117] "RemoveContainer" containerID="f558af2f95131207fa7bd2d2215c90799ff2dae2975235e102954c6f59fe4b36" Feb 18 12:08:49 crc kubenswrapper[4717]: E0218 12:08:49.773759 
4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f558af2f95131207fa7bd2d2215c90799ff2dae2975235e102954c6f59fe4b36\": container with ID starting with f558af2f95131207fa7bd2d2215c90799ff2dae2975235e102954c6f59fe4b36 not found: ID does not exist" containerID="f558af2f95131207fa7bd2d2215c90799ff2dae2975235e102954c6f59fe4b36" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.773800 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f558af2f95131207fa7bd2d2215c90799ff2dae2975235e102954c6f59fe4b36"} err="failed to get container status \"f558af2f95131207fa7bd2d2215c90799ff2dae2975235e102954c6f59fe4b36\": rpc error: code = NotFound desc = could not find container \"f558af2f95131207fa7bd2d2215c90799ff2dae2975235e102954c6f59fe4b36\": container with ID starting with f558af2f95131207fa7bd2d2215c90799ff2dae2975235e102954c6f59fe4b36 not found: ID does not exist" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.773823 4717 scope.go:117] "RemoveContainer" containerID="c48ad9ae248465e8c685562c251aeb10dac9e20863c4e020b8deef8309feeccd" Feb 18 12:08:49 crc kubenswrapper[4717]: E0218 12:08:49.774075 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c48ad9ae248465e8c685562c251aeb10dac9e20863c4e020b8deef8309feeccd\": container with ID starting with c48ad9ae248465e8c685562c251aeb10dac9e20863c4e020b8deef8309feeccd not found: ID does not exist" containerID="c48ad9ae248465e8c685562c251aeb10dac9e20863c4e020b8deef8309feeccd" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.774124 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c48ad9ae248465e8c685562c251aeb10dac9e20863c4e020b8deef8309feeccd"} err="failed to get container status \"c48ad9ae248465e8c685562c251aeb10dac9e20863c4e020b8deef8309feeccd\": rpc error: code = 
NotFound desc = could not find container \"c48ad9ae248465e8c685562c251aeb10dac9e20863c4e020b8deef8309feeccd\": container with ID starting with c48ad9ae248465e8c685562c251aeb10dac9e20863c4e020b8deef8309feeccd not found: ID does not exist" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.789315 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "49858184-0ffa-49c8-8fd5-5e3935eb70f9" (UID: "49858184-0ffa-49c8-8fd5-5e3935eb70f9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:49 crc kubenswrapper[4717]: I0218 12:08:49.873171 4717 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49858184-0ffa-49c8-8fd5-5e3935eb70f9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:50 crc kubenswrapper[4717]: I0218 12:08:50.057443 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5f47d9d48-pwkwm"] Feb 18 12:08:50 crc kubenswrapper[4717]: I0218 12:08:50.068693 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5f47d9d48-pwkwm"] Feb 18 12:08:51 crc kubenswrapper[4717]: I0218 12:08:51.074102 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49858184-0ffa-49c8-8fd5-5e3935eb70f9" path="/var/lib/kubelet/pods/49858184-0ffa-49c8-8fd5-5e3935eb70f9/volumes" Feb 18 12:08:53 crc kubenswrapper[4717]: I0218 12:08:53.412307 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:53 crc kubenswrapper[4717]: I0218 12:08:53.413042 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerName="ceilometer-central-agent" 
containerID="cri-o://4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421" gracePeriod=30 Feb 18 12:08:53 crc kubenswrapper[4717]: I0218 12:08:53.413208 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerName="proxy-httpd" containerID="cri-o://dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6" gracePeriod=30 Feb 18 12:08:53 crc kubenswrapper[4717]: I0218 12:08:53.413277 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerName="sg-core" containerID="cri-o://d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12" gracePeriod=30 Feb 18 12:08:53 crc kubenswrapper[4717]: I0218 12:08:53.413317 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerName="ceilometer-notification-agent" containerID="cri-o://81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c" gracePeriod=30 Feb 18 12:08:53 crc kubenswrapper[4717]: I0218 12:08:53.766126 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4","Type":"ContainerDied","Data":"dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6"} Feb 18 12:08:53 crc kubenswrapper[4717]: I0218 12:08:53.766038 4717 generic.go:334] "Generic (PLEG): container finished" podID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerID="dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6" exitCode=0 Feb 18 12:08:53 crc kubenswrapper[4717]: I0218 12:08:53.766650 4717 generic.go:334] "Generic (PLEG): container finished" podID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerID="d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12" exitCode=2 Feb 18 12:08:53 crc kubenswrapper[4717]: I0218 
12:08:53.766681 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4","Type":"ContainerDied","Data":"d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12"} Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.464047 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.552604 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.553227 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e76e98e5-47e8-4c4c-ab97-f37cad99c313" containerName="glance-httpd" containerID="cri-o://c321c235cc5eab02f81494487ecb1900249c83c58dccbe824b356ba1a8b845f7" gracePeriod=30 Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.553513 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e76e98e5-47e8-4c4c-ab97-f37cad99c313" containerName="glance-log" containerID="cri-o://e1a733b2aeb09679d3f67848a41e32376fb45ad684f0da257b6745765f88b5e4" gracePeriod=30 Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.594154 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd9ch\" (UniqueName: \"kubernetes.io/projected/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-kube-api-access-pd9ch\") pod \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.594435 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-scripts\") pod \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\" (UID: 
\"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.594480 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-run-httpd\") pod \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.594536 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-combined-ca-bundle\") pod \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.594559 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-config-data\") pod \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.595244 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-sg-core-conf-yaml\") pod \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.595563 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-log-httpd\") pod \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\" (UID: \"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4\") " Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.596639 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" (UID: "abbd5506-76e8-4dff-b6e9-ecf39b0a17b4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.597701 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" (UID: "abbd5506-76e8-4dff-b6e9-ecf39b0a17b4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.612065 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-kube-api-access-pd9ch" (OuterVolumeSpecName: "kube-api-access-pd9ch") pod "abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" (UID: "abbd5506-76e8-4dff-b6e9-ecf39b0a17b4"). InnerVolumeSpecName "kube-api-access-pd9ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.616389 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-scripts" (OuterVolumeSpecName: "scripts") pod "abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" (UID: "abbd5506-76e8-4dff-b6e9-ecf39b0a17b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.659269 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" (UID: "abbd5506-76e8-4dff-b6e9-ecf39b0a17b4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.700180 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.700250 4717 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.701025 4717 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.701070 4717 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.701086 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd9ch\" (UniqueName: \"kubernetes.io/projected/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-kube-api-access-pd9ch\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.740376 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-config-data" (OuterVolumeSpecName: "config-data") pod "abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" (UID: "abbd5506-76e8-4dff-b6e9-ecf39b0a17b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.747618 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" (UID: "abbd5506-76e8-4dff-b6e9-ecf39b0a17b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.782568 4717 generic.go:334] "Generic (PLEG): container finished" podID="e76e98e5-47e8-4c4c-ab97-f37cad99c313" containerID="e1a733b2aeb09679d3f67848a41e32376fb45ad684f0da257b6745765f88b5e4" exitCode=143 Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.782886 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e76e98e5-47e8-4c4c-ab97-f37cad99c313","Type":"ContainerDied","Data":"e1a733b2aeb09679d3f67848a41e32376fb45ad684f0da257b6745765f88b5e4"} Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.791539 4717 generic.go:334] "Generic (PLEG): container finished" podID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerID="81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c" exitCode=0 Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.791820 4717 generic.go:334] "Generic (PLEG): container finished" podID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerID="4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421" exitCode=0 Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.791861 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.791763 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4","Type":"ContainerDied","Data":"81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c"} Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.794087 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4","Type":"ContainerDied","Data":"4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421"} Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.794102 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abbd5506-76e8-4dff-b6e9-ecf39b0a17b4","Type":"ContainerDied","Data":"e595b186e1501ae8e160677ff9fe2e710cda9cce3c9bad597957b47daf2dca62"} Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.794123 4717 scope.go:117] "RemoveContainer" containerID="dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.802973 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.803013 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.836889 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.847112 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 
12:08:54.866522 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:54 crc kubenswrapper[4717]: E0218 12:08:54.867043 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49858184-0ffa-49c8-8fd5-5e3935eb70f9" containerName="placement-api" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.867069 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="49858184-0ffa-49c8-8fd5-5e3935eb70f9" containerName="placement-api" Feb 18 12:08:54 crc kubenswrapper[4717]: E0218 12:08:54.867087 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerName="ceilometer-central-agent" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.867095 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerName="ceilometer-central-agent" Feb 18 12:08:54 crc kubenswrapper[4717]: E0218 12:08:54.867116 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49858184-0ffa-49c8-8fd5-5e3935eb70f9" containerName="placement-log" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.867123 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="49858184-0ffa-49c8-8fd5-5e3935eb70f9" containerName="placement-log" Feb 18 12:08:54 crc kubenswrapper[4717]: E0218 12:08:54.867142 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerName="sg-core" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.867149 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerName="sg-core" Feb 18 12:08:54 crc kubenswrapper[4717]: E0218 12:08:54.867165 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerName="ceilometer-notification-agent" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.867171 4717 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerName="ceilometer-notification-agent" Feb 18 12:08:54 crc kubenswrapper[4717]: E0218 12:08:54.867188 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerName="proxy-httpd" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.867194 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerName="proxy-httpd" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.867379 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerName="ceilometer-central-agent" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.867392 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerName="sg-core" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.867403 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerName="proxy-httpd" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.867412 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" containerName="ceilometer-notification-agent" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.867421 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="49858184-0ffa-49c8-8fd5-5e3935eb70f9" containerName="placement-log" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.867434 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="49858184-0ffa-49c8-8fd5-5e3935eb70f9" containerName="placement-api" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.869096 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.874782 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.875800 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.899816 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.915636 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440fd121-17dd-4c30-b4a0-02d92a770cc6-log-httpd\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.915875 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440fd121-17dd-4c30-b4a0-02d92a770cc6-run-httpd\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.916084 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.916222 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjmpk\" (UniqueName: \"kubernetes.io/projected/440fd121-17dd-4c30-b4a0-02d92a770cc6-kube-api-access-zjmpk\") pod \"ceilometer-0\" (UID: 
\"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.916346 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-scripts\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.918358 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-config-data\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:54 crc kubenswrapper[4717]: I0218 12:08:54.918478 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.021893 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440fd121-17dd-4c30-b4a0-02d92a770cc6-run-httpd\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.022169 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.022235 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zjmpk\" (UniqueName: \"kubernetes.io/projected/440fd121-17dd-4c30-b4a0-02d92a770cc6-kube-api-access-zjmpk\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.022305 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-scripts\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.022355 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-config-data\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.022392 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.022430 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440fd121-17dd-4c30-b4a0-02d92a770cc6-log-httpd\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.022640 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440fd121-17dd-4c30-b4a0-02d92a770cc6-run-httpd\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " 
pod="openstack/ceilometer-0" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.023230 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440fd121-17dd-4c30-b4a0-02d92a770cc6-log-httpd\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.029320 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.032387 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.032887 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-scripts\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.040972 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-config-data\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.054618 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjmpk\" (UniqueName: 
\"kubernetes.io/projected/440fd121-17dd-4c30-b4a0-02d92a770cc6-kube-api-access-zjmpk\") pod \"ceilometer-0\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " pod="openstack/ceilometer-0" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.059904 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abbd5506-76e8-4dff-b6e9-ecf39b0a17b4" path="/var/lib/kubelet/pods/abbd5506-76e8-4dff-b6e9-ecf39b0a17b4/volumes" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.203125 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:08:55 crc kubenswrapper[4717]: I0218 12:08:55.598744 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:08:57 crc kubenswrapper[4717]: I0218 12:08:57.858926 4717 generic.go:334] "Generic (PLEG): container finished" podID="e76e98e5-47e8-4c4c-ab97-f37cad99c313" containerID="c321c235cc5eab02f81494487ecb1900249c83c58dccbe824b356ba1a8b845f7" exitCode=0 Feb 18 12:08:57 crc kubenswrapper[4717]: I0218 12:08:57.859352 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e76e98e5-47e8-4c4c-ab97-f37cad99c313","Type":"ContainerDied","Data":"c321c235cc5eab02f81494487ecb1900249c83c58dccbe824b356ba1a8b845f7"} Feb 18 12:08:59 crc kubenswrapper[4717]: I0218 12:08:59.998387 4717 scope.go:117] "RemoveContainer" containerID="d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.111516 4717 scope.go:117] "RemoveContainer" containerID="81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.284680 4717 scope.go:117] "RemoveContainer" containerID="4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.308429 4717 scope.go:117] "RemoveContainer" 
containerID="dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6" Feb 18 12:09:00 crc kubenswrapper[4717]: E0218 12:09:00.309091 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6\": container with ID starting with dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6 not found: ID does not exist" containerID="dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.309160 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6"} err="failed to get container status \"dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6\": rpc error: code = NotFound desc = could not find container \"dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6\": container with ID starting with dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6 not found: ID does not exist" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.309204 4717 scope.go:117] "RemoveContainer" containerID="d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12" Feb 18 12:09:00 crc kubenswrapper[4717]: E0218 12:09:00.309625 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12\": container with ID starting with d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12 not found: ID does not exist" containerID="d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.309647 4717 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12"} err="failed to get container status \"d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12\": rpc error: code = NotFound desc = could not find container \"d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12\": container with ID starting with d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12 not found: ID does not exist" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.309664 4717 scope.go:117] "RemoveContainer" containerID="81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c" Feb 18 12:09:00 crc kubenswrapper[4717]: E0218 12:09:00.310129 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c\": container with ID starting with 81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c not found: ID does not exist" containerID="81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.310194 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c"} err="failed to get container status \"81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c\": rpc error: code = NotFound desc = could not find container \"81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c\": container with ID starting with 81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c not found: ID does not exist" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.310250 4717 scope.go:117] "RemoveContainer" containerID="4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421" Feb 18 12:09:00 crc kubenswrapper[4717]: E0218 12:09:00.310681 4717 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421\": container with ID starting with 4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421 not found: ID does not exist" containerID="4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.310714 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421"} err="failed to get container status \"4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421\": rpc error: code = NotFound desc = could not find container \"4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421\": container with ID starting with 4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421 not found: ID does not exist" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.310739 4717 scope.go:117] "RemoveContainer" containerID="dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.311077 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6"} err="failed to get container status \"dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6\": rpc error: code = NotFound desc = could not find container \"dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6\": container with ID starting with dd3236985b1f610cd89aa9395c8acf557d561afce72a1a0d902c692705fbd5f6 not found: ID does not exist" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.311111 4717 scope.go:117] "RemoveContainer" containerID="d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.311475 4717 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12"} err="failed to get container status \"d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12\": rpc error: code = NotFound desc = could not find container \"d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12\": container with ID starting with d6e49c593472802a0a9c93d476fb3019c74a6e49e0ffb0e8d19f75acaa43fb12 not found: ID does not exist" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.311498 4717 scope.go:117] "RemoveContainer" containerID="81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.311967 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c"} err="failed to get container status \"81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c\": rpc error: code = NotFound desc = could not find container \"81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c\": container with ID starting with 81dc52a75613842392c53ca4a1f4cde4082daf27c295065a686f7619669e835c not found: ID does not exist" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.311996 4717 scope.go:117] "RemoveContainer" containerID="4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.312444 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421"} err="failed to get container status \"4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421\": rpc error: code = NotFound desc = could not find container \"4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421\": container with ID starting with 
4d6d936358fde17d5707015513543d87926f27bb11af8cc73ac57dc6e7bcc421 not found: ID does not exist" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.578647 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.673315 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkjwt\" (UniqueName: \"kubernetes.io/projected/e76e98e5-47e8-4c4c-ab97-f37cad99c313-kube-api-access-tkjwt\") pod \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.673376 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-config-data\") pod \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.673450 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.673487 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e76e98e5-47e8-4c4c-ab97-f37cad99c313-httpd-run\") pod \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.673529 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-scripts\") pod \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " 
Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.673603 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-internal-tls-certs\") pod \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.673621 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-combined-ca-bundle\") pod \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.673667 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e76e98e5-47e8-4c4c-ab97-f37cad99c313-logs\") pod \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\" (UID: \"e76e98e5-47e8-4c4c-ab97-f37cad99c313\") " Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.674116 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e76e98e5-47e8-4c4c-ab97-f37cad99c313-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e76e98e5-47e8-4c4c-ab97-f37cad99c313" (UID: "e76e98e5-47e8-4c4c-ab97-f37cad99c313"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.674465 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e76e98e5-47e8-4c4c-ab97-f37cad99c313-logs" (OuterVolumeSpecName: "logs") pod "e76e98e5-47e8-4c4c-ab97-f37cad99c313" (UID: "e76e98e5-47e8-4c4c-ab97-f37cad99c313"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.682471 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e76e98e5-47e8-4c4c-ab97-f37cad99c313-kube-api-access-tkjwt" (OuterVolumeSpecName: "kube-api-access-tkjwt") pod "e76e98e5-47e8-4c4c-ab97-f37cad99c313" (UID: "e76e98e5-47e8-4c4c-ab97-f37cad99c313"). InnerVolumeSpecName "kube-api-access-tkjwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.684368 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-scripts" (OuterVolumeSpecName: "scripts") pod "e76e98e5-47e8-4c4c-ab97-f37cad99c313" (UID: "e76e98e5-47e8-4c4c-ab97-f37cad99c313"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.690412 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "e76e98e5-47e8-4c4c-ab97-f37cad99c313" (UID: "e76e98e5-47e8-4c4c-ab97-f37cad99c313"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.733971 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.740867 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e76e98e5-47e8-4c4c-ab97-f37cad99c313" (UID: "e76e98e5-47e8-4c4c-ab97-f37cad99c313"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.751398 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-config-data" (OuterVolumeSpecName: "config-data") pod "e76e98e5-47e8-4c4c-ab97-f37cad99c313" (UID: "e76e98e5-47e8-4c4c-ab97-f37cad99c313"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.776641 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkjwt\" (UniqueName: \"kubernetes.io/projected/e76e98e5-47e8-4c4c-ab97-f37cad99c313-kube-api-access-tkjwt\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.776681 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.776722 4717 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.776735 4717 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e76e98e5-47e8-4c4c-ab97-f37cad99c313-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.776746 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.776756 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.776766 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e76e98e5-47e8-4c4c-ab97-f37cad99c313-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.784480 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e76e98e5-47e8-4c4c-ab97-f37cad99c313" (UID: "e76e98e5-47e8-4c4c-ab97-f37cad99c313"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.799527 4717 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.878896 4717 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.878947 4717 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e76e98e5-47e8-4c4c-ab97-f37cad99c313-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.893874 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440fd121-17dd-4c30-b4a0-02d92a770cc6","Type":"ContainerStarted","Data":"57c7b84baf1597116ecc4df85baed46d234fdbfe99d264efe9c6ebe2e575dfe3"} Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.896559 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-kk96s" event={"ID":"ba2e1766-bfe0-4a06-bb70-833b33300ec4","Type":"ContainerStarted","Data":"82bfe9414e2287220f17d75be09b6c40e8a4945cc2f2ddfd9dedaedac4c7c258"} Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.900406 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e76e98e5-47e8-4c4c-ab97-f37cad99c313","Type":"ContainerDied","Data":"d55fc7cd2ea015d827696d45b32d7d3307289d9171f7b1a71f521f894497ba10"} Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.900450 4717 scope.go:117] "RemoveContainer" containerID="c321c235cc5eab02f81494487ecb1900249c83c58dccbe824b356ba1a8b845f7" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.900568 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.933677 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-kk96s" podStartSLOduration=2.429012431 podStartE2EDuration="12.933641845s" podCreationTimestamp="2026-02-18 12:08:48 +0000 UTC" firstStartedPulling="2026-02-18 12:08:49.62877749 +0000 UTC m=+1164.030878806" lastFinishedPulling="2026-02-18 12:09:00.133406904 +0000 UTC m=+1174.535508220" observedRunningTime="2026-02-18 12:09:00.915773449 +0000 UTC m=+1175.317874765" watchObservedRunningTime="2026-02-18 12:09:00.933641845 +0000 UTC m=+1175.335743161" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.934017 4717 scope.go:117] "RemoveContainer" containerID="e1a733b2aeb09679d3f67848a41e32376fb45ad684f0da257b6745765f88b5e4" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.958547 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.974335 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.990400 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 12:09:00 crc kubenswrapper[4717]: E0218 12:09:00.990945 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e76e98e5-47e8-4c4c-ab97-f37cad99c313" containerName="glance-log" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.990961 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="e76e98e5-47e8-4c4c-ab97-f37cad99c313" containerName="glance-log" Feb 18 12:09:00 crc kubenswrapper[4717]: E0218 12:09:00.991011 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e76e98e5-47e8-4c4c-ab97-f37cad99c313" containerName="glance-httpd" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.991018 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="e76e98e5-47e8-4c4c-ab97-f37cad99c313" containerName="glance-httpd" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.991196 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="e76e98e5-47e8-4c4c-ab97-f37cad99c313" containerName="glance-log" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.991224 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="e76e98e5-47e8-4c4c-ab97-f37cad99c313" containerName="glance-httpd" Feb 18 12:09:00 crc kubenswrapper[4717]: I0218 12:09:00.992391 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.002059 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.011726 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.011908 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.057278 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e76e98e5-47e8-4c4c-ab97-f37cad99c313" path="/var/lib/kubelet/pods/e76e98e5-47e8-4c4c-ab97-f37cad99c313/volumes" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.192110 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71fd55dc-beb4-4d07-af77-f244d5b1d399-config-data\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.192651 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71fd55dc-beb4-4d07-af77-f244d5b1d399-scripts\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.192740 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71fd55dc-beb4-4d07-af77-f244d5b1d399-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " 
pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.192759 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnjf6\" (UniqueName: \"kubernetes.io/projected/71fd55dc-beb4-4d07-af77-f244d5b1d399-kube-api-access-nnjf6\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.192798 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71fd55dc-beb4-4d07-af77-f244d5b1d399-logs\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.192817 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71fd55dc-beb4-4d07-af77-f244d5b1d399-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.192838 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.192860 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71fd55dc-beb4-4d07-af77-f244d5b1d399-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.296873 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71fd55dc-beb4-4d07-af77-f244d5b1d399-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.297730 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71fd55dc-beb4-4d07-af77-f244d5b1d399-config-data\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.298113 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71fd55dc-beb4-4d07-af77-f244d5b1d399-scripts\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.298283 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71fd55dc-beb4-4d07-af77-f244d5b1d399-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.298321 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnjf6\" (UniqueName: \"kubernetes.io/projected/71fd55dc-beb4-4d07-af77-f244d5b1d399-kube-api-access-nnjf6\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " 
pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.298388 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71fd55dc-beb4-4d07-af77-f244d5b1d399-logs\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.298874 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71fd55dc-beb4-4d07-af77-f244d5b1d399-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.298981 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.299248 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71fd55dc-beb4-4d07-af77-f244d5b1d399-logs\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.299244 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71fd55dc-beb4-4d07-af77-f244d5b1d399-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.303353 4717 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71fd55dc-beb4-4d07-af77-f244d5b1d399-scripts\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.303545 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71fd55dc-beb4-4d07-af77-f244d5b1d399-config-data\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.304277 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71fd55dc-beb4-4d07-af77-f244d5b1d399-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.305239 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71fd55dc-beb4-4d07-af77-f244d5b1d399-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.305321 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.318315 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nnjf6\" (UniqueName: \"kubernetes.io/projected/71fd55dc-beb4-4d07-af77-f244d5b1d399-kube-api-access-nnjf6\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.348111 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"71fd55dc-beb4-4d07-af77-f244d5b1d399\") " pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.400095 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.918892 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440fd121-17dd-4c30-b4a0-02d92a770cc6","Type":"ContainerStarted","Data":"b8a67ed32b9c9a1e5e6bf9dbf3004369662b29dff055e6c1b4a91a6e3f5255f4"} Feb 18 12:09:01 crc kubenswrapper[4717]: I0218 12:09:01.975442 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 12:09:01 crc kubenswrapper[4717]: W0218 12:09:01.986075 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71fd55dc_beb4_4d07_af77_f244d5b1d399.slice/crio-ba6907d0a49dd80eacd6f022c7b8717934ceabeeeddb7b7f6ad46cb9034ce179 WatchSource:0}: Error finding container ba6907d0a49dd80eacd6f022c7b8717934ceabeeeddb7b7f6ad46cb9034ce179: Status 404 returned error can't find the container with id ba6907d0a49dd80eacd6f022c7b8717934ceabeeeddb7b7f6ad46cb9034ce179 Feb 18 12:09:02 crc kubenswrapper[4717]: I0218 12:09:02.935446 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"71fd55dc-beb4-4d07-af77-f244d5b1d399","Type":"ContainerStarted","Data":"2508cf6130a455a6c18a2393791ce92db19749f5c20e085e3feeecc513f1ef80"} Feb 18 12:09:02 crc kubenswrapper[4717]: I0218 12:09:02.936528 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"71fd55dc-beb4-4d07-af77-f244d5b1d399","Type":"ContainerStarted","Data":"ba6907d0a49dd80eacd6f022c7b8717934ceabeeeddb7b7f6ad46cb9034ce179"} Feb 18 12:09:02 crc kubenswrapper[4717]: I0218 12:09:02.940072 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440fd121-17dd-4c30-b4a0-02d92a770cc6","Type":"ContainerStarted","Data":"a118cbfa60d46c307253acf8d2bae59e6bbe0a4e16783efea2aad888bdca1eb3"} Feb 18 12:09:03 crc kubenswrapper[4717]: I0218 12:09:03.404001 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 12:09:03 crc kubenswrapper[4717]: I0218 12:09:03.404818 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" containerName="glance-httpd" containerID="cri-o://27637b5563867a9208732eeb4ddb23a11c2ced41d1ca0b40403e2916f1d24c85" gracePeriod=30 Feb 18 12:09:03 crc kubenswrapper[4717]: I0218 12:09:03.405213 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" containerName="glance-log" containerID="cri-o://f774765cb2ed66c85abf2ebba93415d2a5e3b43cc6860e0646890a4d35442b7d" gracePeriod=30 Feb 18 12:09:03 crc kubenswrapper[4717]: I0218 12:09:03.957684 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"71fd55dc-beb4-4d07-af77-f244d5b1d399","Type":"ContainerStarted","Data":"2227f3eaf40b486694b14f9582b5a3ad0f1d04a9b02fd0dbd898ac7acaa4c816"} Feb 18 12:09:03 crc 
kubenswrapper[4717]: I0218 12:09:03.962837 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440fd121-17dd-4c30-b4a0-02d92a770cc6","Type":"ContainerStarted","Data":"c2b93979a79cef1d6c55231f3c20239ff72a219e06a87a149cfeb78bf055612f"} Feb 18 12:09:03 crc kubenswrapper[4717]: I0218 12:09:03.966410 4717 generic.go:334] "Generic (PLEG): container finished" podID="f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" containerID="f774765cb2ed66c85abf2ebba93415d2a5e3b43cc6860e0646890a4d35442b7d" exitCode=143 Feb 18 12:09:03 crc kubenswrapper[4717]: I0218 12:09:03.966485 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b","Type":"ContainerDied","Data":"f774765cb2ed66c85abf2ebba93415d2a5e3b43cc6860e0646890a4d35442b7d"} Feb 18 12:09:03 crc kubenswrapper[4717]: I0218 12:09:03.981047 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.981024121 podStartE2EDuration="3.981024121s" podCreationTimestamp="2026-02-18 12:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:09:03.977395427 +0000 UTC m=+1178.379496733" watchObservedRunningTime="2026-02-18 12:09:03.981024121 +0000 UTC m=+1178.383125427" Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.017727 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440fd121-17dd-4c30-b4a0-02d92a770cc6","Type":"ContainerStarted","Data":"4287ae6116d1527d3a550020cf0702e8dec9159b7dd738cff2f7a4957f15877d"} Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.018745 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.018145 4717 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerName="ceilometer-notification-agent" containerID="cri-o://a118cbfa60d46c307253acf8d2bae59e6bbe0a4e16783efea2aad888bdca1eb3" gracePeriod=30 Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.018111 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerName="sg-core" containerID="cri-o://c2b93979a79cef1d6c55231f3c20239ff72a219e06a87a149cfeb78bf055612f" gracePeriod=30 Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.018236 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerName="ceilometer-central-agent" containerID="cri-o://b8a67ed32b9c9a1e5e6bf9dbf3004369662b29dff055e6c1b4a91a6e3f5255f4" gracePeriod=30 Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.018157 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerName="proxy-httpd" containerID="cri-o://4287ae6116d1527d3a550020cf0702e8dec9159b7dd738cff2f7a4957f15877d" gracePeriod=30 Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.055737 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.026678781 podStartE2EDuration="12.055708478s" podCreationTimestamp="2026-02-18 12:08:54 +0000 UTC" firstStartedPulling="2026-02-18 12:09:00.740443275 +0000 UTC m=+1175.142544591" lastFinishedPulling="2026-02-18 12:09:04.769472972 +0000 UTC m=+1179.171574288" observedRunningTime="2026-02-18 12:09:06.050875558 +0000 UTC m=+1180.452976894" watchObservedRunningTime="2026-02-18 12:09:06.055708478 +0000 UTC m=+1180.457809784" Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.831339 4717 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.934242 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-horizon-secret-key\") pod \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.934482 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-horizon-tls-certs\") pod \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.934530 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txzxr\" (UniqueName: \"kubernetes.io/projected/619d0b9d-837a-4790-88cd-d2e11c6da6fc-kube-api-access-txzxr\") pod \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.934588 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/619d0b9d-837a-4790-88cd-d2e11c6da6fc-config-data\") pod \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.934647 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/619d0b9d-837a-4790-88cd-d2e11c6da6fc-scripts\") pod \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.934806 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/619d0b9d-837a-4790-88cd-d2e11c6da6fc-logs\") pod \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.934870 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-combined-ca-bundle\") pod \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\" (UID: \"619d0b9d-837a-4790-88cd-d2e11c6da6fc\") " Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.938643 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/619d0b9d-837a-4790-88cd-d2e11c6da6fc-logs" (OuterVolumeSpecName: "logs") pod "619d0b9d-837a-4790-88cd-d2e11c6da6fc" (UID: "619d0b9d-837a-4790-88cd-d2e11c6da6fc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.944142 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "619d0b9d-837a-4790-88cd-d2e11c6da6fc" (UID: "619d0b9d-837a-4790-88cd-d2e11c6da6fc"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.948148 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/619d0b9d-837a-4790-88cd-d2e11c6da6fc-kube-api-access-txzxr" (OuterVolumeSpecName: "kube-api-access-txzxr") pod "619d0b9d-837a-4790-88cd-d2e11c6da6fc" (UID: "619d0b9d-837a-4790-88cd-d2e11c6da6fc"). InnerVolumeSpecName "kube-api-access-txzxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.964038 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/619d0b9d-837a-4790-88cd-d2e11c6da6fc-scripts" (OuterVolumeSpecName: "scripts") pod "619d0b9d-837a-4790-88cd-d2e11c6da6fc" (UID: "619d0b9d-837a-4790-88cd-d2e11c6da6fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.964837 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/619d0b9d-837a-4790-88cd-d2e11c6da6fc-config-data" (OuterVolumeSpecName: "config-data") pod "619d0b9d-837a-4790-88cd-d2e11c6da6fc" (UID: "619d0b9d-837a-4790-88cd-d2e11c6da6fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:09:06 crc kubenswrapper[4717]: I0218 12:09:06.973242 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "619d0b9d-837a-4790-88cd-d2e11c6da6fc" (UID: "619d0b9d-837a-4790-88cd-d2e11c6da6fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.015518 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "619d0b9d-837a-4790-88cd-d2e11c6da6fc" (UID: "619d0b9d-837a-4790-88cd-d2e11c6da6fc"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.041066 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/619d0b9d-837a-4790-88cd-d2e11c6da6fc-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.041111 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.041128 4717 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.041142 4717 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/619d0b9d-837a-4790-88cd-d2e11c6da6fc-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.041154 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txzxr\" (UniqueName: \"kubernetes.io/projected/619d0b9d-837a-4790-88cd-d2e11c6da6fc-kube-api-access-txzxr\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.041169 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/619d0b9d-837a-4790-88cd-d2e11c6da6fc-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.041182 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/619d0b9d-837a-4790-88cd-d2e11c6da6fc-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.042886 4717 generic.go:334] 
"Generic (PLEG): container finished" podID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerID="2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2" exitCode=137 Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.042930 4717 generic.go:334] "Generic (PLEG): container finished" podID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerID="ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494" exitCode=137 Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.049653 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b5f4c76fb-t68w8" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.050559 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b5f4c76fb-t68w8" event={"ID":"619d0b9d-837a-4790-88cd-d2e11c6da6fc","Type":"ContainerDied","Data":"2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2"} Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.050604 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b5f4c76fb-t68w8" event={"ID":"619d0b9d-837a-4790-88cd-d2e11c6da6fc","Type":"ContainerDied","Data":"ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494"} Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.050624 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b5f4c76fb-t68w8" event={"ID":"619d0b9d-837a-4790-88cd-d2e11c6da6fc","Type":"ContainerDied","Data":"6650069fbcd3232ee79dd5dea3be34df2b7da25e88c05b904fccdb394159f61f"} Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.050650 4717 scope.go:117] "RemoveContainer" containerID="2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.058523 4717 generic.go:334] "Generic (PLEG): container finished" podID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerID="4287ae6116d1527d3a550020cf0702e8dec9159b7dd738cff2f7a4957f15877d" exitCode=0 Feb 18 12:09:07 crc 
kubenswrapper[4717]: I0218 12:09:07.058871 4717 generic.go:334] "Generic (PLEG): container finished" podID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerID="c2b93979a79cef1d6c55231f3c20239ff72a219e06a87a149cfeb78bf055612f" exitCode=2 Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.058881 4717 generic.go:334] "Generic (PLEG): container finished" podID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerID="a118cbfa60d46c307253acf8d2bae59e6bbe0a4e16783efea2aad888bdca1eb3" exitCode=0 Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.058593 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440fd121-17dd-4c30-b4a0-02d92a770cc6","Type":"ContainerDied","Data":"4287ae6116d1527d3a550020cf0702e8dec9159b7dd738cff2f7a4957f15877d"} Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.058968 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440fd121-17dd-4c30-b4a0-02d92a770cc6","Type":"ContainerDied","Data":"c2b93979a79cef1d6c55231f3c20239ff72a219e06a87a149cfeb78bf055612f"} Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.058985 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440fd121-17dd-4c30-b4a0-02d92a770cc6","Type":"ContainerDied","Data":"a118cbfa60d46c307253acf8d2bae59e6bbe0a4e16783efea2aad888bdca1eb3"} Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.062405 4717 generic.go:334] "Generic (PLEG): container finished" podID="f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" containerID="27637b5563867a9208732eeb4ddb23a11c2ced41d1ca0b40403e2916f1d24c85" exitCode=0 Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.062470 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b","Type":"ContainerDied","Data":"27637b5563867a9208732eeb4ddb23a11c2ced41d1ca0b40403e2916f1d24c85"} Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 
12:09:07.108461 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b5f4c76fb-t68w8"] Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.118962 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6b5f4c76fb-t68w8"] Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.261698 4717 scope.go:117] "RemoveContainer" containerID="0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.393773 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.455025 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-public-tls-certs\") pod \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.455141 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-scripts\") pod \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.455316 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-httpd-run\") pod \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.455343 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-combined-ca-bundle\") pod \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\" (UID: 
\"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.455366 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-config-data\") pod \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.455412 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcht8\" (UniqueName: \"kubernetes.io/projected/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-kube-api-access-vcht8\") pod \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.455434 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-logs\") pod \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.455459 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\" (UID: \"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b\") " Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.459329 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" (UID: "f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.460351 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-logs" (OuterVolumeSpecName: "logs") pod "f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" (UID: "f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.466039 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-scripts" (OuterVolumeSpecName: "scripts") pod "f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" (UID: "f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.466143 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" (UID: "f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.466309 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-kube-api-access-vcht8" (OuterVolumeSpecName: "kube-api-access-vcht8") pod "f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" (UID: "f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b"). InnerVolumeSpecName "kube-api-access-vcht8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.503981 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" (UID: "f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.538196 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" (UID: "f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.542891 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-config-data" (OuterVolumeSpecName: "config-data") pod "f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" (UID: "f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.558449 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcht8\" (UniqueName: \"kubernetes.io/projected/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-kube-api-access-vcht8\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.558498 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.558543 4717 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.558555 4717 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.558564 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.558573 4717 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.558580 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.558589 4717 reconciler_common.go:293] "Volume detached 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.573745 4717 scope.go:117] "RemoveContainer" containerID="ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.599459 4717 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.618158 4717 scope.go:117] "RemoveContainer" containerID="2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2" Feb 18 12:09:07 crc kubenswrapper[4717]: E0218 12:09:07.618800 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2\": container with ID starting with 2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2 not found: ID does not exist" containerID="2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.618865 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2"} err="failed to get container status \"2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2\": rpc error: code = NotFound desc = could not find container \"2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2\": container with ID starting with 2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2 not found: ID does not exist" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.618918 4717 scope.go:117] "RemoveContainer" containerID="0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce" Feb 18 12:09:07 crc 
kubenswrapper[4717]: E0218 12:09:07.619508 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce\": container with ID starting with 0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce not found: ID does not exist" containerID="0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.619691 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce"} err="failed to get container status \"0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce\": rpc error: code = NotFound desc = could not find container \"0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce\": container with ID starting with 0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce not found: ID does not exist" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.619797 4717 scope.go:117] "RemoveContainer" containerID="ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494" Feb 18 12:09:07 crc kubenswrapper[4717]: E0218 12:09:07.620289 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494\": container with ID starting with ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494 not found: ID does not exist" containerID="ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.620340 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494"} err="failed to get container status 
\"ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494\": rpc error: code = NotFound desc = could not find container \"ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494\": container with ID starting with ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494 not found: ID does not exist" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.620367 4717 scope.go:117] "RemoveContainer" containerID="2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.620775 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2"} err="failed to get container status \"2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2\": rpc error: code = NotFound desc = could not find container \"2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2\": container with ID starting with 2e3e5309e3455261b8e2ab3ce759ab98c5e596ab49ee77e57460fa68a74520c2 not found: ID does not exist" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.620869 4717 scope.go:117] "RemoveContainer" containerID="0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.621226 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce"} err="failed to get container status \"0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce\": rpc error: code = NotFound desc = could not find container \"0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce\": container with ID starting with 0da9cbd9d2f8f09433e7dbfd5b228930dd6a8ac5eac0ae26173372f2a83648ce not found: ID does not exist" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.621252 4717 scope.go:117] "RemoveContainer" 
containerID="ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.621515 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494"} err="failed to get container status \"ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494\": rpc error: code = NotFound desc = could not find container \"ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494\": container with ID starting with ee3c4a87af28824a451f873eab435796562d5d4209ab02eaa3bd4ad237ccb494 not found: ID does not exist" Feb 18 12:09:07 crc kubenswrapper[4717]: I0218 12:09:07.660905 4717 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.083148 4717 generic.go:334] "Generic (PLEG): container finished" podID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerID="b8a67ed32b9c9a1e5e6bf9dbf3004369662b29dff055e6c1b4a91a6e3f5255f4" exitCode=0 Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.083252 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440fd121-17dd-4c30-b4a0-02d92a770cc6","Type":"ContainerDied","Data":"b8a67ed32b9c9a1e5e6bf9dbf3004369662b29dff055e6c1b4a91a6e3f5255f4"} Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.087212 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b","Type":"ContainerDied","Data":"85717296d76cd25f87031dff482ad72b0204ae13894f2c191708452800f0ca27"} Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.087311 4717 scope.go:117] "RemoveContainer" containerID="27637b5563867a9208732eeb4ddb23a11c2ced41d1ca0b40403e2916f1d24c85" Feb 18 12:09:08 crc 
kubenswrapper[4717]: I0218 12:09:08.087457 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.119602 4717 scope.go:117] "RemoveContainer" containerID="f774765cb2ed66c85abf2ebba93415d2a5e3b43cc6860e0646890a4d35442b7d" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.142358 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.156939 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.167358 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 12:09:08 crc kubenswrapper[4717]: E0218 12:09:08.167906 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerName="horizon-log" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.167960 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerName="horizon-log" Feb 18 12:09:08 crc kubenswrapper[4717]: E0218 12:09:08.167990 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" containerName="glance-log" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.167997 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" containerName="glance-log" Feb 18 12:09:08 crc kubenswrapper[4717]: E0218 12:09:08.168011 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerName="horizon" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.168018 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerName="horizon" 
Feb 18 12:09:08 crc kubenswrapper[4717]: E0218 12:09:08.168030 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" containerName="glance-httpd" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.168036 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" containerName="glance-httpd" Feb 18 12:09:08 crc kubenswrapper[4717]: E0218 12:09:08.168053 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerName="horizon" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.168060 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerName="horizon" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.172623 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerName="horizon" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.172680 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerName="horizon" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.172701 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" containerName="horizon-log" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.172715 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" containerName="glance-log" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.172740 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" containerName="glance-httpd" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.174248 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.180469 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.180676 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.181317 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.278806 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.278886 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05e71379-15cf-4f83-a548-a46ba29caada-scripts\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.278981 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9rpl\" (UniqueName: \"kubernetes.io/projected/05e71379-15cf-4f83-a548-a46ba29caada-kube-api-access-t9rpl\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.279031 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/05e71379-15cf-4f83-a548-a46ba29caada-logs\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.279095 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05e71379-15cf-4f83-a548-a46ba29caada-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.279139 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05e71379-15cf-4f83-a548-a46ba29caada-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.279187 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e71379-15cf-4f83-a548-a46ba29caada-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.279227 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05e71379-15cf-4f83-a548-a46ba29caada-config-data\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.381777 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.381844 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05e71379-15cf-4f83-a548-a46ba29caada-scripts\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.381893 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9rpl\" (UniqueName: \"kubernetes.io/projected/05e71379-15cf-4f83-a548-a46ba29caada-kube-api-access-t9rpl\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.381942 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05e71379-15cf-4f83-a548-a46ba29caada-logs\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.382017 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05e71379-15cf-4f83-a548-a46ba29caada-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.382073 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05e71379-15cf-4f83-a548-a46ba29caada-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.382118 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e71379-15cf-4f83-a548-a46ba29caada-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.382156 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05e71379-15cf-4f83-a548-a46ba29caada-config-data\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.382574 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.383114 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05e71379-15cf-4f83-a548-a46ba29caada-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.384444 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05e71379-15cf-4f83-a548-a46ba29caada-logs\") pod \"glance-default-external-api-0\" (UID: 
\"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.387627 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05e71379-15cf-4f83-a548-a46ba29caada-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.389222 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05e71379-15cf-4f83-a548-a46ba29caada-config-data\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.392188 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05e71379-15cf-4f83-a548-a46ba29caada-scripts\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.394352 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05e71379-15cf-4f83-a548-a46ba29caada-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.406559 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9rpl\" (UniqueName: \"kubernetes.io/projected/05e71379-15cf-4f83-a548-a46ba29caada-kube-api-access-t9rpl\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" 
Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.422816 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"05e71379-15cf-4f83-a548-a46ba29caada\") " pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.498513 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.514724 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.585122 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjmpk\" (UniqueName: \"kubernetes.io/projected/440fd121-17dd-4c30-b4a0-02d92a770cc6-kube-api-access-zjmpk\") pod \"440fd121-17dd-4c30-b4a0-02d92a770cc6\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.586018 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-config-data\") pod \"440fd121-17dd-4c30-b4a0-02d92a770cc6\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.586320 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-sg-core-conf-yaml\") pod \"440fd121-17dd-4c30-b4a0-02d92a770cc6\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.586789 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-combined-ca-bundle\") pod \"440fd121-17dd-4c30-b4a0-02d92a770cc6\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.587571 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440fd121-17dd-4c30-b4a0-02d92a770cc6-log-httpd\") pod \"440fd121-17dd-4c30-b4a0-02d92a770cc6\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.587865 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440fd121-17dd-4c30-b4a0-02d92a770cc6-run-httpd\") pod \"440fd121-17dd-4c30-b4a0-02d92a770cc6\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.588008 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-scripts\") pod \"440fd121-17dd-4c30-b4a0-02d92a770cc6\" (UID: \"440fd121-17dd-4c30-b4a0-02d92a770cc6\") " Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.588175 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/440fd121-17dd-4c30-b4a0-02d92a770cc6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "440fd121-17dd-4c30-b4a0-02d92a770cc6" (UID: "440fd121-17dd-4c30-b4a0-02d92a770cc6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.588339 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/440fd121-17dd-4c30-b4a0-02d92a770cc6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "440fd121-17dd-4c30-b4a0-02d92a770cc6" (UID: "440fd121-17dd-4c30-b4a0-02d92a770cc6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.589920 4717 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440fd121-17dd-4c30-b4a0-02d92a770cc6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.590017 4717 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/440fd121-17dd-4c30-b4a0-02d92a770cc6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.594003 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-scripts" (OuterVolumeSpecName: "scripts") pod "440fd121-17dd-4c30-b4a0-02d92a770cc6" (UID: "440fd121-17dd-4c30-b4a0-02d92a770cc6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.595555 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/440fd121-17dd-4c30-b4a0-02d92a770cc6-kube-api-access-zjmpk" (OuterVolumeSpecName: "kube-api-access-zjmpk") pod "440fd121-17dd-4c30-b4a0-02d92a770cc6" (UID: "440fd121-17dd-4c30-b4a0-02d92a770cc6"). InnerVolumeSpecName "kube-api-access-zjmpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.657489 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "440fd121-17dd-4c30-b4a0-02d92a770cc6" (UID: "440fd121-17dd-4c30-b4a0-02d92a770cc6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.693400 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.693438 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjmpk\" (UniqueName: \"kubernetes.io/projected/440fd121-17dd-4c30-b4a0-02d92a770cc6-kube-api-access-zjmpk\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.693450 4717 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.710932 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "440fd121-17dd-4c30-b4a0-02d92a770cc6" (UID: "440fd121-17dd-4c30-b4a0-02d92a770cc6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.727759 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-config-data" (OuterVolumeSpecName: "config-data") pod "440fd121-17dd-4c30-b4a0-02d92a770cc6" (UID: "440fd121-17dd-4c30-b4a0-02d92a770cc6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.796989 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:08 crc kubenswrapper[4717]: I0218 12:09:08.797039 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440fd121-17dd-4c30-b4a0-02d92a770cc6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.051057 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="619d0b9d-837a-4790-88cd-d2e11c6da6fc" path="/var/lib/kubelet/pods/619d0b9d-837a-4790-88cd-d2e11c6da6fc/volumes" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.052066 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b" path="/var/lib/kubelet/pods/f1caebbe-d33a-4fd3-9f9e-c1a2c334f79b/volumes" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.100695 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"440fd121-17dd-4c30-b4a0-02d92a770cc6","Type":"ContainerDied","Data":"57c7b84baf1597116ecc4df85baed46d234fdbfe99d264efe9c6ebe2e575dfe3"} Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.100738 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.100772 4717 scope.go:117] "RemoveContainer" containerID="4287ae6116d1527d3a550020cf0702e8dec9159b7dd738cff2f7a4957f15877d" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.142309 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.152534 4717 scope.go:117] "RemoveContainer" containerID="c2b93979a79cef1d6c55231f3c20239ff72a219e06a87a149cfeb78bf055612f" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.175267 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.222160 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.239543 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:09:09 crc kubenswrapper[4717]: E0218 12:09:09.240148 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerName="ceilometer-central-agent" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.240172 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerName="ceilometer-central-agent" Feb 18 12:09:09 crc kubenswrapper[4717]: E0218 12:09:09.240192 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerName="proxy-httpd" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.240202 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerName="proxy-httpd" Feb 18 12:09:09 crc kubenswrapper[4717]: E0218 12:09:09.240217 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" 
containerName="sg-core" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.240224 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerName="sg-core" Feb 18 12:09:09 crc kubenswrapper[4717]: E0218 12:09:09.240234 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerName="ceilometer-notification-agent" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.240243 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerName="ceilometer-notification-agent" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.240555 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerName="ceilometer-notification-agent" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.240580 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerName="sg-core" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.240609 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerName="proxy-httpd" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.240626 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" containerName="ceilometer-central-agent" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.245182 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.252829 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.258320 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.258683 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.280410 4717 scope.go:117] "RemoveContainer" containerID="a118cbfa60d46c307253acf8d2bae59e6bbe0a4e16783efea2aad888bdca1eb3" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.309236 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-config-data\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.309784 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkrd2\" (UniqueName: \"kubernetes.io/projected/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-kube-api-access-fkrd2\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.310049 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-run-httpd\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.310158 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-log-httpd\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.310346 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.310405 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.310476 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-scripts\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.319734 4717 scope.go:117] "RemoveContainer" containerID="b8a67ed32b9c9a1e5e6bf9dbf3004369662b29dff055e6c1b4a91a6e3f5255f4" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.412613 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-run-httpd\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.412723 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-log-httpd\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.412775 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.412794 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.412836 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-scripts\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.412887 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-config-data\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.412924 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkrd2\" (UniqueName: \"kubernetes.io/projected/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-kube-api-access-fkrd2\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 
crc kubenswrapper[4717]: I0218 12:09:09.413408 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-run-httpd\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.415179 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-log-httpd\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.420614 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.420643 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-scripts\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.424466 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-config-data\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.425640 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.432859 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkrd2\" (UniqueName: \"kubernetes.io/projected/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-kube-api-access-fkrd2\") pod \"ceilometer-0\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " pod="openstack/ceilometer-0" Feb 18 12:09:09 crc kubenswrapper[4717]: I0218 12:09:09.574030 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:09:10 crc kubenswrapper[4717]: I0218 12:09:10.121865 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05e71379-15cf-4f83-a548-a46ba29caada","Type":"ContainerStarted","Data":"8b60cbe791da5f83d4e375c9310f14ba1db1fdf0b3787e18a2240c436a77997e"} Feb 18 12:09:10 crc kubenswrapper[4717]: I0218 12:09:10.122732 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05e71379-15cf-4f83-a548-a46ba29caada","Type":"ContainerStarted","Data":"73508b225c76f903cfee3a12eb600c426ef67966bd02d2d2c809389009ed81e8"} Feb 18 12:09:10 crc kubenswrapper[4717]: I0218 12:09:10.137126 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:09:11 crc kubenswrapper[4717]: I0218 12:09:11.049468 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="440fd121-17dd-4c30-b4a0-02d92a770cc6" path="/var/lib/kubelet/pods/440fd121-17dd-4c30-b4a0-02d92a770cc6/volumes" Feb 18 12:09:11 crc kubenswrapper[4717]: I0218 12:09:11.142318 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0288b4d3-0187-4a4d-9c5b-6aad8be516b4","Type":"ContainerStarted","Data":"53db6e5e3790be448dcade39b6514311b98f3251d71a99094d0f04d122ae86e3"} Feb 18 12:09:11 crc kubenswrapper[4717]: I0218 12:09:11.152749 
4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"05e71379-15cf-4f83-a548-a46ba29caada","Type":"ContainerStarted","Data":"9e1dc07788b0e088955dc8e6ac70c5fd4479a7ec5fbe6f74a7d11989e8f56e7d"} Feb 18 12:09:11 crc kubenswrapper[4717]: I0218 12:09:11.175789 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.175763362 podStartE2EDuration="3.175763362s" podCreationTimestamp="2026-02-18 12:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:09:11.174699672 +0000 UTC m=+1185.576801018" watchObservedRunningTime="2026-02-18 12:09:11.175763362 +0000 UTC m=+1185.577864678" Feb 18 12:09:11 crc kubenswrapper[4717]: I0218 12:09:11.400555 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 12:09:11 crc kubenswrapper[4717]: I0218 12:09:11.400933 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 12:09:11 crc kubenswrapper[4717]: I0218 12:09:11.443506 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 12:09:11 crc kubenswrapper[4717]: I0218 12:09:11.453404 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 12:09:12 crc kubenswrapper[4717]: I0218 12:09:12.166473 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0288b4d3-0187-4a4d-9c5b-6aad8be516b4","Type":"ContainerStarted","Data":"3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7"} Feb 18 12:09:12 crc kubenswrapper[4717]: I0218 12:09:12.167006 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Feb 18 12:09:12 crc kubenswrapper[4717]: I0218 12:09:12.167062 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 12:09:12 crc kubenswrapper[4717]: I0218 12:09:12.287908 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:09:14 crc kubenswrapper[4717]: I0218 12:09:14.328818 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 12:09:14 crc kubenswrapper[4717]: I0218 12:09:14.329542 4717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 12:09:14 crc kubenswrapper[4717]: I0218 12:09:14.345055 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 12:09:15 crc kubenswrapper[4717]: I0218 12:09:15.216804 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0288b4d3-0187-4a4d-9c5b-6aad8be516b4","Type":"ContainerStarted","Data":"942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42"} Feb 18 12:09:16 crc kubenswrapper[4717]: I0218 12:09:16.241462 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0288b4d3-0187-4a4d-9c5b-6aad8be516b4","Type":"ContainerStarted","Data":"b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1"} Feb 18 12:09:18 crc kubenswrapper[4717]: I0218 12:09:18.266294 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0288b4d3-0187-4a4d-9c5b-6aad8be516b4","Type":"ContainerStarted","Data":"9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44"} Feb 18 12:09:18 crc kubenswrapper[4717]: I0218 12:09:18.267078 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" 
containerName="ceilometer-central-agent" containerID="cri-o://3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7" gracePeriod=30 Feb 18 12:09:18 crc kubenswrapper[4717]: I0218 12:09:18.267444 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 12:09:18 crc kubenswrapper[4717]: I0218 12:09:18.267735 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerName="proxy-httpd" containerID="cri-o://9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44" gracePeriod=30 Feb 18 12:09:18 crc kubenswrapper[4717]: I0218 12:09:18.267783 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerName="sg-core" containerID="cri-o://b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1" gracePeriod=30 Feb 18 12:09:18 crc kubenswrapper[4717]: I0218 12:09:18.267822 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerName="ceilometer-notification-agent" containerID="cri-o://942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42" gracePeriod=30 Feb 18 12:09:18 crc kubenswrapper[4717]: I0218 12:09:18.306484 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.797762466 podStartE2EDuration="9.306457143s" podCreationTimestamp="2026-02-18 12:09:09 +0000 UTC" firstStartedPulling="2026-02-18 12:09:10.162438258 +0000 UTC m=+1184.564539574" lastFinishedPulling="2026-02-18 12:09:17.671132935 +0000 UTC m=+1192.073234251" observedRunningTime="2026-02-18 12:09:18.2966426 +0000 UTC m=+1192.698743916" watchObservedRunningTime="2026-02-18 12:09:18.306457143 +0000 UTC m=+1192.708558459" Feb 18 12:09:18 crc kubenswrapper[4717]: 
I0218 12:09:18.500348 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 12:09:18 crc kubenswrapper[4717]: I0218 12:09:18.501180 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 12:09:18 crc kubenswrapper[4717]: I0218 12:09:18.536343 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 12:09:18 crc kubenswrapper[4717]: I0218 12:09:18.548702 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.150210 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.246350 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-log-httpd\") pod \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.246440 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-run-httpd\") pod \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.246514 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkrd2\" (UniqueName: \"kubernetes.io/projected/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-kube-api-access-fkrd2\") pod \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.246578 4717 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-sg-core-conf-yaml\") pod \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.246651 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-config-data\") pod \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.246967 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0288b4d3-0187-4a4d-9c5b-6aad8be516b4" (UID: "0288b4d3-0187-4a4d-9c5b-6aad8be516b4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.247105 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0288b4d3-0187-4a4d-9c5b-6aad8be516b4" (UID: "0288b4d3-0187-4a4d-9c5b-6aad8be516b4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.247550 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-combined-ca-bundle\") pod \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.247627 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-scripts\") pod \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\" (UID: \"0288b4d3-0187-4a4d-9c5b-6aad8be516b4\") " Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.248977 4717 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.249016 4717 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.254025 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-scripts" (OuterVolumeSpecName: "scripts") pod "0288b4d3-0187-4a4d-9c5b-6aad8be516b4" (UID: "0288b4d3-0187-4a4d-9c5b-6aad8be516b4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.254060 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-kube-api-access-fkrd2" (OuterVolumeSpecName: "kube-api-access-fkrd2") pod "0288b4d3-0187-4a4d-9c5b-6aad8be516b4" (UID: "0288b4d3-0187-4a4d-9c5b-6aad8be516b4"). InnerVolumeSpecName "kube-api-access-fkrd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.277146 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0288b4d3-0187-4a4d-9c5b-6aad8be516b4" (UID: "0288b4d3-0187-4a4d-9c5b-6aad8be516b4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.282722 4717 generic.go:334] "Generic (PLEG): container finished" podID="ba2e1766-bfe0-4a06-bb70-833b33300ec4" containerID="82bfe9414e2287220f17d75be09b6c40e8a4945cc2f2ddfd9dedaedac4c7c258" exitCode=0 Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.282882 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kk96s" event={"ID":"ba2e1766-bfe0-4a06-bb70-833b33300ec4","Type":"ContainerDied","Data":"82bfe9414e2287220f17d75be09b6c40e8a4945cc2f2ddfd9dedaedac4c7c258"} Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.288448 4717 generic.go:334] "Generic (PLEG): container finished" podID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerID="9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44" exitCode=0 Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.288488 4717 generic.go:334] "Generic (PLEG): container finished" podID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" 
containerID="b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1" exitCode=2 Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.288558 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.288574 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0288b4d3-0187-4a4d-9c5b-6aad8be516b4","Type":"ContainerDied","Data":"9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44"} Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.288633 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0288b4d3-0187-4a4d-9c5b-6aad8be516b4","Type":"ContainerDied","Data":"b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1"} Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.288653 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0288b4d3-0187-4a4d-9c5b-6aad8be516b4","Type":"ContainerDied","Data":"942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42"} Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.288674 4717 scope.go:117] "RemoveContainer" containerID="9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.288558 4717 generic.go:334] "Generic (PLEG): container finished" podID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerID="942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42" exitCode=0 Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.289143 4717 generic.go:334] "Generic (PLEG): container finished" podID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerID="3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7" exitCode=0 Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.289478 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0288b4d3-0187-4a4d-9c5b-6aad8be516b4","Type":"ContainerDied","Data":"3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7"} Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.289557 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0288b4d3-0187-4a4d-9c5b-6aad8be516b4","Type":"ContainerDied","Data":"53db6e5e3790be448dcade39b6514311b98f3251d71a99094d0f04d122ae86e3"} Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.289832 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.289973 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.332866 4717 scope.go:117] "RemoveContainer" containerID="b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.351805 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0288b4d3-0187-4a4d-9c5b-6aad8be516b4" (UID: "0288b4d3-0187-4a4d-9c5b-6aad8be516b4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.356189 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.356238 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.356254 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkrd2\" (UniqueName: \"kubernetes.io/projected/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-kube-api-access-fkrd2\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.356284 4717 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.357899 4717 scope.go:117] "RemoveContainer" containerID="942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.358053 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-config-data" (OuterVolumeSpecName: "config-data") pod "0288b4d3-0187-4a4d-9c5b-6aad8be516b4" (UID: "0288b4d3-0187-4a4d-9c5b-6aad8be516b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.381656 4717 scope.go:117] "RemoveContainer" containerID="3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.408918 4717 scope.go:117] "RemoveContainer" containerID="9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44" Feb 18 12:09:19 crc kubenswrapper[4717]: E0218 12:09:19.409610 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44\": container with ID starting with 9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44 not found: ID does not exist" containerID="9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.409681 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44"} err="failed to get container status \"9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44\": rpc error: code = NotFound desc = could not find container \"9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44\": container with ID starting with 9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44 not found: ID does not exist" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.409721 4717 scope.go:117] "RemoveContainer" containerID="b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1" Feb 18 12:09:19 crc kubenswrapper[4717]: E0218 12:09:19.413005 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1\": container with ID starting with 
b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1 not found: ID does not exist" containerID="b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.413152 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1"} err="failed to get container status \"b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1\": rpc error: code = NotFound desc = could not find container \"b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1\": container with ID starting with b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1 not found: ID does not exist" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.413289 4717 scope.go:117] "RemoveContainer" containerID="942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42" Feb 18 12:09:19 crc kubenswrapper[4717]: E0218 12:09:19.414517 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42\": container with ID starting with 942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42 not found: ID does not exist" containerID="942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.414576 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42"} err="failed to get container status \"942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42\": rpc error: code = NotFound desc = could not find container \"942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42\": container with ID starting with 942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42 not found: ID does not 
exist" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.414612 4717 scope.go:117] "RemoveContainer" containerID="3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7" Feb 18 12:09:19 crc kubenswrapper[4717]: E0218 12:09:19.414962 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7\": container with ID starting with 3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7 not found: ID does not exist" containerID="3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.415065 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7"} err="failed to get container status \"3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7\": rpc error: code = NotFound desc = could not find container \"3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7\": container with ID starting with 3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7 not found: ID does not exist" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.415159 4717 scope.go:117] "RemoveContainer" containerID="9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.415497 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44"} err="failed to get container status \"9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44\": rpc error: code = NotFound desc = could not find container \"9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44\": container with ID starting with 9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44 not found: ID 
does not exist" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.415578 4717 scope.go:117] "RemoveContainer" containerID="b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.415993 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1"} err="failed to get container status \"b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1\": rpc error: code = NotFound desc = could not find container \"b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1\": container with ID starting with b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1 not found: ID does not exist" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.416089 4717 scope.go:117] "RemoveContainer" containerID="942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.416656 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42"} err="failed to get container status \"942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42\": rpc error: code = NotFound desc = could not find container \"942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42\": container with ID starting with 942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42 not found: ID does not exist" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.416776 4717 scope.go:117] "RemoveContainer" containerID="3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.417130 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7"} err="failed to get container 
status \"3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7\": rpc error: code = NotFound desc = could not find container \"3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7\": container with ID starting with 3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7 not found: ID does not exist" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.417272 4717 scope.go:117] "RemoveContainer" containerID="9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.418866 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44"} err="failed to get container status \"9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44\": rpc error: code = NotFound desc = could not find container \"9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44\": container with ID starting with 9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44 not found: ID does not exist" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.418903 4717 scope.go:117] "RemoveContainer" containerID="b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.419208 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1"} err="failed to get container status \"b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1\": rpc error: code = NotFound desc = could not find container \"b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1\": container with ID starting with b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1 not found: ID does not exist" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.419237 4717 scope.go:117] "RemoveContainer" 
containerID="942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.419596 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42"} err="failed to get container status \"942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42\": rpc error: code = NotFound desc = could not find container \"942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42\": container with ID starting with 942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42 not found: ID does not exist" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.419693 4717 scope.go:117] "RemoveContainer" containerID="3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.420222 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7"} err="failed to get container status \"3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7\": rpc error: code = NotFound desc = could not find container \"3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7\": container with ID starting with 3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7 not found: ID does not exist" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.420274 4717 scope.go:117] "RemoveContainer" containerID="9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.420682 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44"} err="failed to get container status \"9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44\": rpc error: code = NotFound desc = could 
not find container \"9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44\": container with ID starting with 9954fbecfb38ac2561048cddc8aba9af5754c77893173d036e3ada886ac7ad44 not found: ID does not exist" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.420741 4717 scope.go:117] "RemoveContainer" containerID="b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.421112 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1"} err="failed to get container status \"b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1\": rpc error: code = NotFound desc = could not find container \"b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1\": container with ID starting with b2a209731be5f547c22a6a9cee94413d117da82cb3180808ab57af89eb89acb1 not found: ID does not exist" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.421205 4717 scope.go:117] "RemoveContainer" containerID="942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.421515 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42"} err="failed to get container status \"942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42\": rpc error: code = NotFound desc = could not find container \"942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42\": container with ID starting with 942b6b63f733cf0be30a164a235d2f606832328461809764e43698949687ce42 not found: ID does not exist" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.421608 4717 scope.go:117] "RemoveContainer" containerID="3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 
12:09:19.421911 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7"} err="failed to get container status \"3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7\": rpc error: code = NotFound desc = could not find container \"3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7\": container with ID starting with 3c3dc00096b5c64f004ed1dd79251c7f2a008fa79059ebbd81361513ecf689a7 not found: ID does not exist" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.458352 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0288b4d3-0187-4a4d-9c5b-6aad8be516b4-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.632939 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.639551 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.652627 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:09:19 crc kubenswrapper[4717]: E0218 12:09:19.653063 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerName="ceilometer-notification-agent" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.653078 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerName="ceilometer-notification-agent" Feb 18 12:09:19 crc kubenswrapper[4717]: E0218 12:09:19.653094 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerName="proxy-httpd" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.653100 4717 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerName="proxy-httpd" Feb 18 12:09:19 crc kubenswrapper[4717]: E0218 12:09:19.653113 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerName="sg-core" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.653121 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerName="sg-core" Feb 18 12:09:19 crc kubenswrapper[4717]: E0218 12:09:19.653133 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerName="ceilometer-central-agent" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.653140 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerName="ceilometer-central-agent" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.653348 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerName="ceilometer-central-agent" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.653364 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerName="ceilometer-notification-agent" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.653372 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerName="sg-core" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.653383 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" containerName="proxy-httpd" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.655330 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.665881 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.666348 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.695884 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.770245 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.770405 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15688e6-5868-4aea-94fe-377241de4120-run-httpd\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.770467 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15688e6-5868-4aea-94fe-377241de4120-log-httpd\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.770546 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-scripts\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " 
pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.770576 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.770604 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h9f8\" (UniqueName: \"kubernetes.io/projected/b15688e6-5868-4aea-94fe-377241de4120-kube-api-access-5h9f8\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.770638 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-config-data\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.872336 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15688e6-5868-4aea-94fe-377241de4120-log-httpd\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.872522 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-scripts\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.872921 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15688e6-5868-4aea-94fe-377241de4120-log-httpd\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.873142 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h9f8\" (UniqueName: \"kubernetes.io/projected/b15688e6-5868-4aea-94fe-377241de4120-kube-api-access-5h9f8\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.873186 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.873215 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-config-data\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.873252 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.873345 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15688e6-5868-4aea-94fe-377241de4120-run-httpd\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 
12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.873844 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15688e6-5868-4aea-94fe-377241de4120-run-httpd\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.877409 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-scripts\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.877664 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.878567 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-config-data\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.881026 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.895948 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h9f8\" (UniqueName: \"kubernetes.io/projected/b15688e6-5868-4aea-94fe-377241de4120-kube-api-access-5h9f8\") pod \"ceilometer-0\" (UID: 
\"b15688e6-5868-4aea-94fe-377241de4120\") " pod="openstack/ceilometer-0" Feb 18 12:09:19 crc kubenswrapper[4717]: I0218 12:09:19.988863 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:09:20 crc kubenswrapper[4717]: I0218 12:09:20.585700 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:09:20 crc kubenswrapper[4717]: W0218 12:09:20.587855 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb15688e6_5868_4aea_94fe_377241de4120.slice/crio-3e993bd71685ae17ce8a259f3bc9e7f7413e24a85b5996ff39556495834a2f86 WatchSource:0}: Error finding container 3e993bd71685ae17ce8a259f3bc9e7f7413e24a85b5996ff39556495834a2f86: Status 404 returned error can't find the container with id 3e993bd71685ae17ce8a259f3bc9e7f7413e24a85b5996ff39556495834a2f86 Feb 18 12:09:20 crc kubenswrapper[4717]: I0218 12:09:20.615421 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kk96s" Feb 18 12:09:20 crc kubenswrapper[4717]: I0218 12:09:20.778953 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-combined-ca-bundle\") pod \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\" (UID: \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\") " Feb 18 12:09:20 crc kubenswrapper[4717]: I0218 12:09:20.779328 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-config-data\") pod \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\" (UID: \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\") " Feb 18 12:09:20 crc kubenswrapper[4717]: I0218 12:09:20.779376 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlnvd\" (UniqueName: \"kubernetes.io/projected/ba2e1766-bfe0-4a06-bb70-833b33300ec4-kube-api-access-qlnvd\") pod \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\" (UID: \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\") " Feb 18 12:09:20 crc kubenswrapper[4717]: I0218 12:09:20.779465 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-scripts\") pod \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\" (UID: \"ba2e1766-bfe0-4a06-bb70-833b33300ec4\") " Feb 18 12:09:20 crc kubenswrapper[4717]: I0218 12:09:20.786431 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-scripts" (OuterVolumeSpecName: "scripts") pod "ba2e1766-bfe0-4a06-bb70-833b33300ec4" (UID: "ba2e1766-bfe0-4a06-bb70-833b33300ec4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:20 crc kubenswrapper[4717]: I0218 12:09:20.786501 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba2e1766-bfe0-4a06-bb70-833b33300ec4-kube-api-access-qlnvd" (OuterVolumeSpecName: "kube-api-access-qlnvd") pod "ba2e1766-bfe0-4a06-bb70-833b33300ec4" (UID: "ba2e1766-bfe0-4a06-bb70-833b33300ec4"). InnerVolumeSpecName "kube-api-access-qlnvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:09:20 crc kubenswrapper[4717]: I0218 12:09:20.815453 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba2e1766-bfe0-4a06-bb70-833b33300ec4" (UID: "ba2e1766-bfe0-4a06-bb70-833b33300ec4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:20 crc kubenswrapper[4717]: I0218 12:09:20.820147 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-config-data" (OuterVolumeSpecName: "config-data") pod "ba2e1766-bfe0-4a06-bb70-833b33300ec4" (UID: "ba2e1766-bfe0-4a06-bb70-833b33300ec4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:20 crc kubenswrapper[4717]: I0218 12:09:20.881979 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:20 crc kubenswrapper[4717]: I0218 12:09:20.882047 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlnvd\" (UniqueName: \"kubernetes.io/projected/ba2e1766-bfe0-4a06-bb70-833b33300ec4-kube-api-access-qlnvd\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:20 crc kubenswrapper[4717]: I0218 12:09:20.882063 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:20 crc kubenswrapper[4717]: I0218 12:09:20.882077 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2e1766-bfe0-4a06-bb70-833b33300ec4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.052569 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0288b4d3-0187-4a4d-9c5b-6aad8be516b4" path="/var/lib/kubelet/pods/0288b4d3-0187-4a4d-9c5b-6aad8be516b4/volumes" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.347188 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15688e6-5868-4aea-94fe-377241de4120","Type":"ContainerStarted","Data":"3e993bd71685ae17ce8a259f3bc9e7f7413e24a85b5996ff39556495834a2f86"} Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.349569 4717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.349592 4717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 
12:09:21.349616 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kk96s" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.349547 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kk96s" event={"ID":"ba2e1766-bfe0-4a06-bb70-833b33300ec4","Type":"ContainerDied","Data":"9b949dd61d98c2e31b5399023c69bf4ecbe828d04a4cd84e42c71c8d684c18ff"} Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.349703 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b949dd61d98c2e31b5399023c69bf4ecbe828d04a4cd84e42c71c8d684c18ff" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.441100 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 12:09:21 crc kubenswrapper[4717]: E0218 12:09:21.441591 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2e1766-bfe0-4a06-bb70-833b33300ec4" containerName="nova-cell0-conductor-db-sync" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.441620 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2e1766-bfe0-4a06-bb70-833b33300ec4" containerName="nova-cell0-conductor-db-sync" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.441865 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2e1766-bfe0-4a06-bb70-833b33300ec4" containerName="nova-cell0-conductor-db-sync" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.442631 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.448635 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.449314 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nbvvt" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.457986 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.572776 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.574217 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.597642 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2f56\" (UniqueName: \"kubernetes.io/projected/fb3569ab-95c9-42eb-9c7d-979b7c09f862-kube-api-access-z2f56\") pod \"nova-cell0-conductor-0\" (UID: \"fb3569ab-95c9-42eb-9c7d-979b7c09f862\") " pod="openstack/nova-cell0-conductor-0" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.597771 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3569ab-95c9-42eb-9c7d-979b7c09f862-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fb3569ab-95c9-42eb-9c7d-979b7c09f862\") " pod="openstack/nova-cell0-conductor-0" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.597807 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fb3569ab-95c9-42eb-9c7d-979b7c09f862-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fb3569ab-95c9-42eb-9c7d-979b7c09f862\") " pod="openstack/nova-cell0-conductor-0" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.700360 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3569ab-95c9-42eb-9c7d-979b7c09f862-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fb3569ab-95c9-42eb-9c7d-979b7c09f862\") " pod="openstack/nova-cell0-conductor-0" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.700414 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3569ab-95c9-42eb-9c7d-979b7c09f862-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fb3569ab-95c9-42eb-9c7d-979b7c09f862\") " pod="openstack/nova-cell0-conductor-0" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.700518 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2f56\" (UniqueName: \"kubernetes.io/projected/fb3569ab-95c9-42eb-9c7d-979b7c09f862-kube-api-access-z2f56\") pod \"nova-cell0-conductor-0\" (UID: \"fb3569ab-95c9-42eb-9c7d-979b7c09f862\") " pod="openstack/nova-cell0-conductor-0" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.709070 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3569ab-95c9-42eb-9c7d-979b7c09f862-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fb3569ab-95c9-42eb-9c7d-979b7c09f862\") " pod="openstack/nova-cell0-conductor-0" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.726753 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2f56\" (UniqueName: \"kubernetes.io/projected/fb3569ab-95c9-42eb-9c7d-979b7c09f862-kube-api-access-z2f56\") pod \"nova-cell0-conductor-0\" (UID: 
\"fb3569ab-95c9-42eb-9c7d-979b7c09f862\") " pod="openstack/nova-cell0-conductor-0" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.728125 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3569ab-95c9-42eb-9c7d-979b7c09f862-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fb3569ab-95c9-42eb-9c7d-979b7c09f862\") " pod="openstack/nova-cell0-conductor-0" Feb 18 12:09:21 crc kubenswrapper[4717]: I0218 12:09:21.768433 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 12:09:22 crc kubenswrapper[4717]: I0218 12:09:22.366068 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15688e6-5868-4aea-94fe-377241de4120","Type":"ContainerStarted","Data":"615419bb075b7895d127e13b128bdd34b3e0e9912fd72550547dfb5ebdeb62ce"} Feb 18 12:09:22 crc kubenswrapper[4717]: W0218 12:09:22.635498 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb3569ab_95c9_42eb_9c7d_979b7c09f862.slice/crio-3dfbbca4161862164f9ee028a00d474d31528d1448ea20267ecaa00057ca5745 WatchSource:0}: Error finding container 3dfbbca4161862164f9ee028a00d474d31528d1448ea20267ecaa00057ca5745: Status 404 returned error can't find the container with id 3dfbbca4161862164f9ee028a00d474d31528d1448ea20267ecaa00057ca5745 Feb 18 12:09:22 crc kubenswrapper[4717]: I0218 12:09:22.636252 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 12:09:23 crc kubenswrapper[4717]: I0218 12:09:23.382569 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fb3569ab-95c9-42eb-9c7d-979b7c09f862","Type":"ContainerStarted","Data":"dde409af49f87caaae3126ff4e03a0463892ee90dc3d1b11e1136bc48394a2dd"} Feb 18 12:09:23 crc kubenswrapper[4717]: I0218 
12:09:23.383386 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fb3569ab-95c9-42eb-9c7d-979b7c09f862","Type":"ContainerStarted","Data":"3dfbbca4161862164f9ee028a00d474d31528d1448ea20267ecaa00057ca5745"} Feb 18 12:09:23 crc kubenswrapper[4717]: I0218 12:09:23.383440 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 18 12:09:23 crc kubenswrapper[4717]: I0218 12:09:23.411089 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.411065602 podStartE2EDuration="2.411065602s" podCreationTimestamp="2026-02-18 12:09:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:09:23.398475508 +0000 UTC m=+1197.800576824" watchObservedRunningTime="2026-02-18 12:09:23.411065602 +0000 UTC m=+1197.813166918" Feb 18 12:09:24 crc kubenswrapper[4717]: I0218 12:09:24.394668 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15688e6-5868-4aea-94fe-377241de4120","Type":"ContainerStarted","Data":"f3dc751c957bf94e042d04733d25d5fb55dcaf4216754dddb60589f019a5ccf3"} Feb 18 12:09:24 crc kubenswrapper[4717]: I0218 12:09:24.395108 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15688e6-5868-4aea-94fe-377241de4120","Type":"ContainerStarted","Data":"d9dd14757e2aa7f7ab2a620968e85e352216b3a8c858ee1cda2dcf59943b6a29"} Feb 18 12:09:26 crc kubenswrapper[4717]: I0218 12:09:26.418353 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15688e6-5868-4aea-94fe-377241de4120","Type":"ContainerStarted","Data":"0da26040a548ecc2990eed828e71e9a9e0d7cb330471081b0cb2f5312e197b27"} Feb 18 12:09:26 crc kubenswrapper[4717]: I0218 12:09:26.419183 4717 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 12:09:26 crc kubenswrapper[4717]: I0218 12:09:26.451659 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.469626324 podStartE2EDuration="7.451624151s" podCreationTimestamp="2026-02-18 12:09:19 +0000 UTC" firstStartedPulling="2026-02-18 12:09:20.591448143 +0000 UTC m=+1194.993549459" lastFinishedPulling="2026-02-18 12:09:25.57344598 +0000 UTC m=+1199.975547286" observedRunningTime="2026-02-18 12:09:26.446199355 +0000 UTC m=+1200.848300681" watchObservedRunningTime="2026-02-18 12:09:26.451624151 +0000 UTC m=+1200.853725477" Feb 18 12:09:31 crc kubenswrapper[4717]: I0218 12:09:31.798335 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.274723 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8rrq7"] Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.276112 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8rrq7" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.279466 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.281181 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.289744 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8rrq7"] Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.359472 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8rrq7\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " pod="openstack/nova-cell0-cell-mapping-8rrq7" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.359534 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-scripts\") pod \"nova-cell0-cell-mapping-8rrq7\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " pod="openstack/nova-cell0-cell-mapping-8rrq7" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.359583 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-config-data\") pod \"nova-cell0-cell-mapping-8rrq7\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " pod="openstack/nova-cell0-cell-mapping-8rrq7" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.359682 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9wtm\" (UniqueName: 
\"kubernetes.io/projected/91c00704-11f3-4b61-8964-98cd2f711987-kube-api-access-x9wtm\") pod \"nova-cell0-cell-mapping-8rrq7\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " pod="openstack/nova-cell0-cell-mapping-8rrq7" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.461032 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9wtm\" (UniqueName: \"kubernetes.io/projected/91c00704-11f3-4b61-8964-98cd2f711987-kube-api-access-x9wtm\") pod \"nova-cell0-cell-mapping-8rrq7\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " pod="openstack/nova-cell0-cell-mapping-8rrq7" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.461534 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8rrq7\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " pod="openstack/nova-cell0-cell-mapping-8rrq7" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.461567 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-scripts\") pod \"nova-cell0-cell-mapping-8rrq7\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " pod="openstack/nova-cell0-cell-mapping-8rrq7" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.461623 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-config-data\") pod \"nova-cell0-cell-mapping-8rrq7\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " pod="openstack/nova-cell0-cell-mapping-8rrq7" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.468876 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-scripts\") pod \"nova-cell0-cell-mapping-8rrq7\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " pod="openstack/nova-cell0-cell-mapping-8rrq7" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.469977 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-config-data\") pod \"nova-cell0-cell-mapping-8rrq7\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " pod="openstack/nova-cell0-cell-mapping-8rrq7" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.470595 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8rrq7\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " pod="openstack/nova-cell0-cell-mapping-8rrq7" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.484892 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9wtm\" (UniqueName: \"kubernetes.io/projected/91c00704-11f3-4b61-8964-98cd2f711987-kube-api-access-x9wtm\") pod \"nova-cell0-cell-mapping-8rrq7\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " pod="openstack/nova-cell0-cell-mapping-8rrq7" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.545655 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.547104 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.549406 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.574088 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.575678 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.580524 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.586278 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.600146 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8rrq7" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.610438 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.665345 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbk84\" (UniqueName: \"kubernetes.io/projected/cce21079-f0b5-427e-a5f8-ba58efbfed27-kube-api-access-dbk84\") pod \"nova-cell1-novncproxy-0\" (UID: \"cce21079-f0b5-427e-a5f8-ba58efbfed27\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.665406 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a56f42-d311-4ef0-af48-b3eb120ef805-config-data\") pod \"nova-scheduler-0\" (UID: \"d7a56f42-d311-4ef0-af48-b3eb120ef805\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.665490 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce21079-f0b5-427e-a5f8-ba58efbfed27-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cce21079-f0b5-427e-a5f8-ba58efbfed27\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.665555 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm2db\" (UniqueName: \"kubernetes.io/projected/d7a56f42-d311-4ef0-af48-b3eb120ef805-kube-api-access-lm2db\") pod \"nova-scheduler-0\" (UID: \"d7a56f42-d311-4ef0-af48-b3eb120ef805\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.665611 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/cce21079-f0b5-427e-a5f8-ba58efbfed27-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cce21079-f0b5-427e-a5f8-ba58efbfed27\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.665636 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a56f42-d311-4ef0-af48-b3eb120ef805-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d7a56f42-d311-4ef0-af48-b3eb120ef805\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.750606 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.752693 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.756220 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.767066 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm2db\" (UniqueName: \"kubernetes.io/projected/d7a56f42-d311-4ef0-af48-b3eb120ef805-kube-api-access-lm2db\") pod \"nova-scheduler-0\" (UID: \"d7a56f42-d311-4ef0-af48-b3eb120ef805\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.767139 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce21079-f0b5-427e-a5f8-ba58efbfed27-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cce21079-f0b5-427e-a5f8-ba58efbfed27\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.767171 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d7a56f42-d311-4ef0-af48-b3eb120ef805-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d7a56f42-d311-4ef0-af48-b3eb120ef805\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.767204 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbk84\" (UniqueName: \"kubernetes.io/projected/cce21079-f0b5-427e-a5f8-ba58efbfed27-kube-api-access-dbk84\") pod \"nova-cell1-novncproxy-0\" (UID: \"cce21079-f0b5-427e-a5f8-ba58efbfed27\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.767231 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a56f42-d311-4ef0-af48-b3eb120ef805-config-data\") pod \"nova-scheduler-0\" (UID: \"d7a56f42-d311-4ef0-af48-b3eb120ef805\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.767301 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce21079-f0b5-427e-a5f8-ba58efbfed27-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cce21079-f0b5-427e-a5f8-ba58efbfed27\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.777053 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce21079-f0b5-427e-a5f8-ba58efbfed27-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cce21079-f0b5-427e-a5f8-ba58efbfed27\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.780519 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.805328 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce21079-f0b5-427e-a5f8-ba58efbfed27-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cce21079-f0b5-427e-a5f8-ba58efbfed27\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.816362 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a56f42-d311-4ef0-af48-b3eb120ef805-config-data\") pod \"nova-scheduler-0\" (UID: \"d7a56f42-d311-4ef0-af48-b3eb120ef805\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.832171 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a56f42-d311-4ef0-af48-b3eb120ef805-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d7a56f42-d311-4ef0-af48-b3eb120ef805\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.832780 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm2db\" (UniqueName: \"kubernetes.io/projected/d7a56f42-d311-4ef0-af48-b3eb120ef805-kube-api-access-lm2db\") pod \"nova-scheduler-0\" (UID: \"d7a56f42-d311-4ef0-af48-b3eb120ef805\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.835797 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbk84\" (UniqueName: \"kubernetes.io/projected/cce21079-f0b5-427e-a5f8-ba58efbfed27-kube-api-access-dbk84\") pod \"nova-cell1-novncproxy-0\" (UID: \"cce21079-f0b5-427e-a5f8-ba58efbfed27\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.869288 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-config-data\") pod \"nova-api-0\" (UID: 
\"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\") " pod="openstack/nova-api-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.869912 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\") " pod="openstack/nova-api-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.870078 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-logs\") pod \"nova-api-0\" (UID: \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\") " pod="openstack/nova-api-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.870296 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpl8x\" (UniqueName: \"kubernetes.io/projected/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-kube-api-access-gpl8x\") pod \"nova-api-0\" (UID: \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\") " pod="openstack/nova-api-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.908102 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.921918 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.924167 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.932396 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.932583 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.941917 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.980883 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpl8x\" (UniqueName: \"kubernetes.io/projected/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-kube-api-access-gpl8x\") pod \"nova-api-0\" (UID: \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\") " pod="openstack/nova-api-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.980970 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-config-data\") pod \"nova-api-0\" (UID: \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\") " pod="openstack/nova-api-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.981077 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af87b901-682d-453e-adbc-f7f354e6aed6-config-data\") pod \"nova-metadata-0\" (UID: \"af87b901-682d-453e-adbc-f7f354e6aed6\") " pod="openstack/nova-metadata-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.981111 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af87b901-682d-453e-adbc-f7f354e6aed6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af87b901-682d-453e-adbc-f7f354e6aed6\") " pod="openstack/nova-metadata-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.981153 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af87b901-682d-453e-adbc-f7f354e6aed6-logs\") pod \"nova-metadata-0\" (UID: 
\"af87b901-682d-453e-adbc-f7f354e6aed6\") " pod="openstack/nova-metadata-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.981202 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\") " pod="openstack/nova-api-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.981244 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rplxg\" (UniqueName: \"kubernetes.io/projected/af87b901-682d-453e-adbc-f7f354e6aed6-kube-api-access-rplxg\") pod \"nova-metadata-0\" (UID: \"af87b901-682d-453e-adbc-f7f354e6aed6\") " pod="openstack/nova-metadata-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.981296 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-logs\") pod \"nova-api-0\" (UID: \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\") " pod="openstack/nova-api-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.981818 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-logs\") pod \"nova-api-0\" (UID: \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\") " pod="openstack/nova-api-0" Feb 18 12:09:32 crc kubenswrapper[4717]: I0218 12:09:32.993741 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\") " pod="openstack/nova-api-0" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.000757 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-config-data\") pod \"nova-api-0\" (UID: \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\") " pod="openstack/nova-api-0" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.005746 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpl8x\" (UniqueName: \"kubernetes.io/projected/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-kube-api-access-gpl8x\") pod \"nova-api-0\" (UID: \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\") " pod="openstack/nova-api-0" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.089182 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af87b901-682d-453e-adbc-f7f354e6aed6-config-data\") pod \"nova-metadata-0\" (UID: \"af87b901-682d-453e-adbc-f7f354e6aed6\") " pod="openstack/nova-metadata-0" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.089289 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af87b901-682d-453e-adbc-f7f354e6aed6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af87b901-682d-453e-adbc-f7f354e6aed6\") " pod="openstack/nova-metadata-0" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.089359 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af87b901-682d-453e-adbc-f7f354e6aed6-logs\") pod \"nova-metadata-0\" (UID: \"af87b901-682d-453e-adbc-f7f354e6aed6\") " pod="openstack/nova-metadata-0" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.089479 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rplxg\" (UniqueName: \"kubernetes.io/projected/af87b901-682d-453e-adbc-f7f354e6aed6-kube-api-access-rplxg\") pod \"nova-metadata-0\" (UID: \"af87b901-682d-453e-adbc-f7f354e6aed6\") " pod="openstack/nova-metadata-0" Feb 18 
12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.090024 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af87b901-682d-453e-adbc-f7f354e6aed6-logs\") pod \"nova-metadata-0\" (UID: \"af87b901-682d-453e-adbc-f7f354e6aed6\") " pod="openstack/nova-metadata-0" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.102189 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af87b901-682d-453e-adbc-f7f354e6aed6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af87b901-682d-453e-adbc-f7f354e6aed6\") " pod="openstack/nova-metadata-0" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.102394 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af87b901-682d-453e-adbc-f7f354e6aed6-config-data\") pod \"nova-metadata-0\" (UID: \"af87b901-682d-453e-adbc-f7f354e6aed6\") " pod="openstack/nova-metadata-0" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.109006 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-snrr6"] Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.141112 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.143940 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-snrr6"] Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.159106 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rplxg\" (UniqueName: \"kubernetes.io/projected/af87b901-682d-453e-adbc-f7f354e6aed6-kube-api-access-rplxg\") pod \"nova-metadata-0\" (UID: \"af87b901-682d-453e-adbc-f7f354e6aed6\") " pod="openstack/nova-metadata-0" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.174842 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.191853 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-dns-svc\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.191957 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97n7d\" (UniqueName: \"kubernetes.io/projected/0f7f26e5-3242-42f2-97b2-a989658f9950-kube-api-access-97n7d\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.191982 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-config\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc 
kubenswrapper[4717]: I0218 12:09:33.192080 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.192127 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.192148 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.294855 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.294962 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 
12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.295689 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.295804 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-dns-svc\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.295903 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97n7d\" (UniqueName: \"kubernetes.io/projected/0f7f26e5-3242-42f2-97b2-a989658f9950-kube-api-access-97n7d\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.295927 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-config\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.296319 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.296762 4717 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.298319 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.298434 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-dns-svc\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.301887 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-config\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.304969 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.325274 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97n7d\" (UniqueName: \"kubernetes.io/projected/0f7f26e5-3242-42f2-97b2-a989658f9950-kube-api-access-97n7d\") pod \"dnsmasq-dns-757b4f8459-snrr6\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.387114 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8rrq7"] Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.475959 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.515523 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8rrq7" event={"ID":"91c00704-11f3-4b61-8964-98cd2f711987","Type":"ContainerStarted","Data":"fdc9460823ac4923d3e17be081289f1b3dadb3d1ceec00862548efc18ea0d999"} Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.600959 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2lj58"] Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.603152 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2lj58" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.605927 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.607503 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.617414 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2lj58"] Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.651634 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 12:09:33 crc kubenswrapper[4717]: W0218 12:09:33.668867 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcce21079_f0b5_427e_a5f8_ba58efbfed27.slice/crio-35a282134db5a2affc2c8a01431cd2767e67fc62f6284538215b1a005048519f WatchSource:0}: Error finding container 35a282134db5a2affc2c8a01431cd2767e67fc62f6284538215b1a005048519f: Status 404 returned error can't find the container with id 35a282134db5a2affc2c8a01431cd2767e67fc62f6284538215b1a005048519f Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.695085 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.720319 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flsm4\" (UniqueName: \"kubernetes.io/projected/e776a05b-0cc9-43cc-9554-c534022da512-kube-api-access-flsm4\") pod \"nova-cell1-conductor-db-sync-2lj58\" (UID: \"e776a05b-0cc9-43cc-9554-c534022da512\") " pod="openstack/nova-cell1-conductor-db-sync-2lj58" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.720542 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2lj58\" (UID: \"e776a05b-0cc9-43cc-9554-c534022da512\") " pod="openstack/nova-cell1-conductor-db-sync-2lj58" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.720590 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-config-data\") pod \"nova-cell1-conductor-db-sync-2lj58\" (UID: \"e776a05b-0cc9-43cc-9554-c534022da512\") " pod="openstack/nova-cell1-conductor-db-sync-2lj58" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.721602 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-scripts\") pod \"nova-cell1-conductor-db-sync-2lj58\" (UID: \"e776a05b-0cc9-43cc-9554-c534022da512\") " pod="openstack/nova-cell1-conductor-db-sync-2lj58" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.781877 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.823948 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flsm4\" (UniqueName: \"kubernetes.io/projected/e776a05b-0cc9-43cc-9554-c534022da512-kube-api-access-flsm4\") pod \"nova-cell1-conductor-db-sync-2lj58\" (UID: \"e776a05b-0cc9-43cc-9554-c534022da512\") " pod="openstack/nova-cell1-conductor-db-sync-2lj58" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.824051 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-combined-ca-bundle\") pod 
\"nova-cell1-conductor-db-sync-2lj58\" (UID: \"e776a05b-0cc9-43cc-9554-c534022da512\") " pod="openstack/nova-cell1-conductor-db-sync-2lj58" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.824094 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-config-data\") pod \"nova-cell1-conductor-db-sync-2lj58\" (UID: \"e776a05b-0cc9-43cc-9554-c534022da512\") " pod="openstack/nova-cell1-conductor-db-sync-2lj58" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.824311 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-scripts\") pod \"nova-cell1-conductor-db-sync-2lj58\" (UID: \"e776a05b-0cc9-43cc-9554-c534022da512\") " pod="openstack/nova-cell1-conductor-db-sync-2lj58" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.830035 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2lj58\" (UID: \"e776a05b-0cc9-43cc-9554-c534022da512\") " pod="openstack/nova-cell1-conductor-db-sync-2lj58" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.830369 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-config-data\") pod \"nova-cell1-conductor-db-sync-2lj58\" (UID: \"e776a05b-0cc9-43cc-9554-c534022da512\") " pod="openstack/nova-cell1-conductor-db-sync-2lj58" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.830800 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-scripts\") pod \"nova-cell1-conductor-db-sync-2lj58\" (UID: 
\"e776a05b-0cc9-43cc-9554-c534022da512\") " pod="openstack/nova-cell1-conductor-db-sync-2lj58" Feb 18 12:09:33 crc kubenswrapper[4717]: I0218 12:09:33.844719 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flsm4\" (UniqueName: \"kubernetes.io/projected/e776a05b-0cc9-43cc-9554-c534022da512-kube-api-access-flsm4\") pod \"nova-cell1-conductor-db-sync-2lj58\" (UID: \"e776a05b-0cc9-43cc-9554-c534022da512\") " pod="openstack/nova-cell1-conductor-db-sync-2lj58" Feb 18 12:09:34 crc kubenswrapper[4717]: I0218 12:09:34.016578 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:09:34 crc kubenswrapper[4717]: W0218 12:09:34.020321 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf87b901_682d_453e_adbc_f7f354e6aed6.slice/crio-1c52f906fc8f7a5023523ef10da8553ee7922f86a987a5ba4d8bdd45b925dd17 WatchSource:0}: Error finding container 1c52f906fc8f7a5023523ef10da8553ee7922f86a987a5ba4d8bdd45b925dd17: Status 404 returned error can't find the container with id 1c52f906fc8f7a5023523ef10da8553ee7922f86a987a5ba4d8bdd45b925dd17 Feb 18 12:09:34 crc kubenswrapper[4717]: I0218 12:09:34.066101 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2lj58" Feb 18 12:09:34 crc kubenswrapper[4717]: I0218 12:09:34.162389 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-snrr6"] Feb 18 12:09:34 crc kubenswrapper[4717]: I0218 12:09:34.527365 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d7a56f42-d311-4ef0-af48-b3eb120ef805","Type":"ContainerStarted","Data":"9fe12f9fb9d4dde05cef802a8198296506da1d47fc0825d6a1a658c68afac9f4"} Feb 18 12:09:34 crc kubenswrapper[4717]: I0218 12:09:34.531451 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8rrq7" event={"ID":"91c00704-11f3-4b61-8964-98cd2f711987","Type":"ContainerStarted","Data":"3a420f359ada50e6643af8858c6f43293f9ae71afd4f7ed49900185d3f6bb72c"} Feb 18 12:09:34 crc kubenswrapper[4717]: I0218 12:09:34.539449 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af87b901-682d-453e-adbc-f7f354e6aed6","Type":"ContainerStarted","Data":"1c52f906fc8f7a5023523ef10da8553ee7922f86a987a5ba4d8bdd45b925dd17"} Feb 18 12:09:34 crc kubenswrapper[4717]: I0218 12:09:34.547572 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e","Type":"ContainerStarted","Data":"b397069c99d118063971471efd547687e2219302d15b0ad65e71a5fcf71ea8f9"} Feb 18 12:09:34 crc kubenswrapper[4717]: I0218 12:09:34.554208 4717 generic.go:334] "Generic (PLEG): container finished" podID="0f7f26e5-3242-42f2-97b2-a989658f9950" containerID="b201b857e7f37a66084ba2a6451f11313d97cac41c6d6d96fce6f7fee4944308" exitCode=0 Feb 18 12:09:34 crc kubenswrapper[4717]: I0218 12:09:34.554366 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-snrr6" 
event={"ID":"0f7f26e5-3242-42f2-97b2-a989658f9950","Type":"ContainerDied","Data":"b201b857e7f37a66084ba2a6451f11313d97cac41c6d6d96fce6f7fee4944308"} Feb 18 12:09:34 crc kubenswrapper[4717]: I0218 12:09:34.554405 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-snrr6" event={"ID":"0f7f26e5-3242-42f2-97b2-a989658f9950","Type":"ContainerStarted","Data":"7de73cd576f8b6589042052dbb3009f35ed29ff1de7515daec8154bb8c2829a2"} Feb 18 12:09:34 crc kubenswrapper[4717]: I0218 12:09:34.570467 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8rrq7" podStartSLOduration=2.5704444090000003 podStartE2EDuration="2.570444409s" podCreationTimestamp="2026-02-18 12:09:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:09:34.554946701 +0000 UTC m=+1208.957048047" watchObservedRunningTime="2026-02-18 12:09:34.570444409 +0000 UTC m=+1208.972545715" Feb 18 12:09:34 crc kubenswrapper[4717]: I0218 12:09:34.589606 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cce21079-f0b5-427e-a5f8-ba58efbfed27","Type":"ContainerStarted","Data":"35a282134db5a2affc2c8a01431cd2767e67fc62f6284538215b1a005048519f"} Feb 18 12:09:34 crc kubenswrapper[4717]: I0218 12:09:34.663249 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2lj58"] Feb 18 12:09:35 crc kubenswrapper[4717]: I0218 12:09:35.610782 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2lj58" event={"ID":"e776a05b-0cc9-43cc-9554-c534022da512","Type":"ContainerStarted","Data":"36097929a2be8c633219df591fd6edefe007773696afcea09c375684061e58e7"} Feb 18 12:09:35 crc kubenswrapper[4717]: I0218 12:09:35.611253 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-2lj58" event={"ID":"e776a05b-0cc9-43cc-9554-c534022da512","Type":"ContainerStarted","Data":"d830012311ca6b5cbca38e4ba9fa73abacfada2ebc1fefc316c9e966024c97aa"} Feb 18 12:09:35 crc kubenswrapper[4717]: I0218 12:09:35.619034 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-snrr6" event={"ID":"0f7f26e5-3242-42f2-97b2-a989658f9950","Type":"ContainerStarted","Data":"a4840c59ccfede7aeb9ad252d88c81fe2bbbc060d6f93d8e5827b94ee7820edc"} Feb 18 12:09:35 crc kubenswrapper[4717]: I0218 12:09:35.619208 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:35 crc kubenswrapper[4717]: I0218 12:09:35.689687 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-2lj58" podStartSLOduration=2.68965713 podStartE2EDuration="2.68965713s" podCreationTimestamp="2026-02-18 12:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:09:35.644954889 +0000 UTC m=+1210.047056205" watchObservedRunningTime="2026-02-18 12:09:35.68965713 +0000 UTC m=+1210.091758446" Feb 18 12:09:35 crc kubenswrapper[4717]: I0218 12:09:35.701361 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-snrr6" podStartSLOduration=3.701335887 podStartE2EDuration="3.701335887s" podCreationTimestamp="2026-02-18 12:09:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:09:35.666023168 +0000 UTC m=+1210.068124484" watchObservedRunningTime="2026-02-18 12:09:35.701335887 +0000 UTC m=+1210.103437203" Feb 18 12:09:36 crc kubenswrapper[4717]: I0218 12:09:36.153861 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] 
Feb 18 12:09:36 crc kubenswrapper[4717]: I0218 12:09:36.168154 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:09:37 crc kubenswrapper[4717]: I0218 12:09:37.654049 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d7a56f42-d311-4ef0-af48-b3eb120ef805","Type":"ContainerStarted","Data":"eb98497e73e4f459ad425bd8b4cdac42ed9913b5b6c583d4bbfccbc291b902c3"} Feb 18 12:09:37 crc kubenswrapper[4717]: I0218 12:09:37.688989 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.294349775 podStartE2EDuration="5.688949109s" podCreationTimestamp="2026-02-18 12:09:32 +0000 UTC" firstStartedPulling="2026-02-18 12:09:33.79833666 +0000 UTC m=+1208.200437976" lastFinishedPulling="2026-02-18 12:09:37.192935994 +0000 UTC m=+1211.595037310" observedRunningTime="2026-02-18 12:09:37.673187784 +0000 UTC m=+1212.075289110" watchObservedRunningTime="2026-02-18 12:09:37.688949109 +0000 UTC m=+1212.091050435" Feb 18 12:09:37 crc kubenswrapper[4717]: I0218 12:09:37.933854 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 12:09:38 crc kubenswrapper[4717]: I0218 12:09:38.665812 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e","Type":"ContainerStarted","Data":"8ebccb22596d97480d97dfe99dbfa714a45e25a7c9908727ec5320d7faae165d"} Feb 18 12:09:38 crc kubenswrapper[4717]: I0218 12:09:38.666474 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e","Type":"ContainerStarted","Data":"543e463a84d80c9ec78063132f672f75e9852a30d334b2a9cef19c95054ac2d7"} Feb 18 12:09:38 crc kubenswrapper[4717]: I0218 12:09:38.667431 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"cce21079-f0b5-427e-a5f8-ba58efbfed27","Type":"ContainerStarted","Data":"c62c627c132e1c7f7674e36c455f2f4601c1285b7f923f0ca93fc7ea613795ad"} Feb 18 12:09:38 crc kubenswrapper[4717]: I0218 12:09:38.667603 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="cce21079-f0b5-427e-a5f8-ba58efbfed27" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c62c627c132e1c7f7674e36c455f2f4601c1285b7f923f0ca93fc7ea613795ad" gracePeriod=30 Feb 18 12:09:38 crc kubenswrapper[4717]: I0218 12:09:38.672537 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af87b901-682d-453e-adbc-f7f354e6aed6","Type":"ContainerStarted","Data":"3c0dde205d79d0820768347d4319c86de00aa69bd37bb496f38a746e1674e834"} Feb 18 12:09:38 crc kubenswrapper[4717]: I0218 12:09:38.672633 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af87b901-682d-453e-adbc-f7f354e6aed6","Type":"ContainerStarted","Data":"e09bd7107cce0fe76c7d46da5a5401e326581c84a170a2d7ae1853e0421504ea"} Feb 18 12:09:38 crc kubenswrapper[4717]: I0218 12:09:38.672802 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="af87b901-682d-453e-adbc-f7f354e6aed6" containerName="nova-metadata-log" containerID="cri-o://e09bd7107cce0fe76c7d46da5a5401e326581c84a170a2d7ae1853e0421504ea" gracePeriod=30 Feb 18 12:09:38 crc kubenswrapper[4717]: I0218 12:09:38.672928 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="af87b901-682d-453e-adbc-f7f354e6aed6" containerName="nova-metadata-metadata" containerID="cri-o://3c0dde205d79d0820768347d4319c86de00aa69bd37bb496f38a746e1674e834" gracePeriod=30 Feb 18 12:09:38 crc kubenswrapper[4717]: I0218 12:09:38.691032 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-0" podStartSLOduration=3.2092498369999998 podStartE2EDuration="6.690987837s" podCreationTimestamp="2026-02-18 12:09:32 +0000 UTC" firstStartedPulling="2026-02-18 12:09:33.709118074 +0000 UTC m=+1208.111219390" lastFinishedPulling="2026-02-18 12:09:37.190856074 +0000 UTC m=+1211.592957390" observedRunningTime="2026-02-18 12:09:38.690207124 +0000 UTC m=+1213.092308450" watchObservedRunningTime="2026-02-18 12:09:38.690987837 +0000 UTC m=+1213.093089153" Feb 18 12:09:38 crc kubenswrapper[4717]: I0218 12:09:38.724339 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.577484001 podStartE2EDuration="6.724303299s" podCreationTimestamp="2026-02-18 12:09:32 +0000 UTC" firstStartedPulling="2026-02-18 12:09:34.024113491 +0000 UTC m=+1208.426214807" lastFinishedPulling="2026-02-18 12:09:37.170932789 +0000 UTC m=+1211.573034105" observedRunningTime="2026-02-18 12:09:38.715762252 +0000 UTC m=+1213.117863568" watchObservedRunningTime="2026-02-18 12:09:38.724303299 +0000 UTC m=+1213.126404625" Feb 18 12:09:38 crc kubenswrapper[4717]: I0218 12:09:38.746674 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.24813185 podStartE2EDuration="6.746639384s" podCreationTimestamp="2026-02-18 12:09:32 +0000 UTC" firstStartedPulling="2026-02-18 12:09:33.681066284 +0000 UTC m=+1208.083167600" lastFinishedPulling="2026-02-18 12:09:37.179573818 +0000 UTC m=+1211.581675134" observedRunningTime="2026-02-18 12:09:38.738820878 +0000 UTC m=+1213.140922194" watchObservedRunningTime="2026-02-18 12:09:38.746639384 +0000 UTC m=+1213.148740700" Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.686182 4717 generic.go:334] "Generic (PLEG): container finished" podID="af87b901-682d-453e-adbc-f7f354e6aed6" containerID="3c0dde205d79d0820768347d4319c86de00aa69bd37bb496f38a746e1674e834" exitCode=0 Feb 18 12:09:39 crc 
kubenswrapper[4717]: I0218 12:09:39.687603 4717 generic.go:334] "Generic (PLEG): container finished" podID="af87b901-682d-453e-adbc-f7f354e6aed6" containerID="e09bd7107cce0fe76c7d46da5a5401e326581c84a170a2d7ae1853e0421504ea" exitCode=143 Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.689114 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af87b901-682d-453e-adbc-f7f354e6aed6","Type":"ContainerDied","Data":"3c0dde205d79d0820768347d4319c86de00aa69bd37bb496f38a746e1674e834"} Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.689225 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af87b901-682d-453e-adbc-f7f354e6aed6","Type":"ContainerDied","Data":"e09bd7107cce0fe76c7d46da5a5401e326581c84a170a2d7ae1853e0421504ea"} Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.689324 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af87b901-682d-453e-adbc-f7f354e6aed6","Type":"ContainerDied","Data":"1c52f906fc8f7a5023523ef10da8553ee7922f86a987a5ba4d8bdd45b925dd17"} Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.689412 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c52f906fc8f7a5023523ef10da8553ee7922f86a987a5ba4d8bdd45b925dd17" Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.704730 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.803450 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rplxg\" (UniqueName: \"kubernetes.io/projected/af87b901-682d-453e-adbc-f7f354e6aed6-kube-api-access-rplxg\") pod \"af87b901-682d-453e-adbc-f7f354e6aed6\" (UID: \"af87b901-682d-453e-adbc-f7f354e6aed6\") " Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.803916 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af87b901-682d-453e-adbc-f7f354e6aed6-logs\") pod \"af87b901-682d-453e-adbc-f7f354e6aed6\" (UID: \"af87b901-682d-453e-adbc-f7f354e6aed6\") " Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.803984 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af87b901-682d-453e-adbc-f7f354e6aed6-combined-ca-bundle\") pod \"af87b901-682d-453e-adbc-f7f354e6aed6\" (UID: \"af87b901-682d-453e-adbc-f7f354e6aed6\") " Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.804030 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af87b901-682d-453e-adbc-f7f354e6aed6-config-data\") pod \"af87b901-682d-453e-adbc-f7f354e6aed6\" (UID: \"af87b901-682d-453e-adbc-f7f354e6aed6\") " Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.805226 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af87b901-682d-453e-adbc-f7f354e6aed6-logs" (OuterVolumeSpecName: "logs") pod "af87b901-682d-453e-adbc-f7f354e6aed6" (UID: "af87b901-682d-453e-adbc-f7f354e6aed6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.816538 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af87b901-682d-453e-adbc-f7f354e6aed6-kube-api-access-rplxg" (OuterVolumeSpecName: "kube-api-access-rplxg") pod "af87b901-682d-453e-adbc-f7f354e6aed6" (UID: "af87b901-682d-453e-adbc-f7f354e6aed6"). InnerVolumeSpecName "kube-api-access-rplxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.855579 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af87b901-682d-453e-adbc-f7f354e6aed6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af87b901-682d-453e-adbc-f7f354e6aed6" (UID: "af87b901-682d-453e-adbc-f7f354e6aed6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.860916 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af87b901-682d-453e-adbc-f7f354e6aed6-config-data" (OuterVolumeSpecName: "config-data") pod "af87b901-682d-453e-adbc-f7f354e6aed6" (UID: "af87b901-682d-453e-adbc-f7f354e6aed6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.907643 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af87b901-682d-453e-adbc-f7f354e6aed6-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.907743 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af87b901-682d-453e-adbc-f7f354e6aed6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.907758 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af87b901-682d-453e-adbc-f7f354e6aed6-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:39 crc kubenswrapper[4717]: I0218 12:09:39.907768 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rplxg\" (UniqueName: \"kubernetes.io/projected/af87b901-682d-453e-adbc-f7f354e6aed6-kube-api-access-rplxg\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.698007 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.745630 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.763462 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.778293 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:09:40 crc kubenswrapper[4717]: E0218 12:09:40.778952 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af87b901-682d-453e-adbc-f7f354e6aed6" containerName="nova-metadata-metadata" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.778973 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="af87b901-682d-453e-adbc-f7f354e6aed6" containerName="nova-metadata-metadata" Feb 18 12:09:40 crc kubenswrapper[4717]: E0218 12:09:40.779045 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af87b901-682d-453e-adbc-f7f354e6aed6" containerName="nova-metadata-log" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.779055 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="af87b901-682d-453e-adbc-f7f354e6aed6" containerName="nova-metadata-log" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.779338 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="af87b901-682d-453e-adbc-f7f354e6aed6" containerName="nova-metadata-metadata" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.779372 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="af87b901-682d-453e-adbc-f7f354e6aed6" containerName="nova-metadata-log" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.780837 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.784819 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.790363 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.791911 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.832598 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36e9a87b-3c85-45a5-8677-02d69052038c-logs\") pod \"nova-metadata-0\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " pod="openstack/nova-metadata-0" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.832784 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " pod="openstack/nova-metadata-0" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.832865 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " pod="openstack/nova-metadata-0" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.832894 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-config-data\") pod \"nova-metadata-0\" (UID: 
\"36e9a87b-3c85-45a5-8677-02d69052038c\") " pod="openstack/nova-metadata-0" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.832918 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zwxn\" (UniqueName: \"kubernetes.io/projected/36e9a87b-3c85-45a5-8677-02d69052038c-kube-api-access-2zwxn\") pod \"nova-metadata-0\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " pod="openstack/nova-metadata-0" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.935245 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " pod="openstack/nova-metadata-0" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.935319 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-config-data\") pod \"nova-metadata-0\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " pod="openstack/nova-metadata-0" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.935346 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zwxn\" (UniqueName: \"kubernetes.io/projected/36e9a87b-3c85-45a5-8677-02d69052038c-kube-api-access-2zwxn\") pod \"nova-metadata-0\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " pod="openstack/nova-metadata-0" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.935443 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36e9a87b-3c85-45a5-8677-02d69052038c-logs\") pod \"nova-metadata-0\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " pod="openstack/nova-metadata-0" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.935509 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " pod="openstack/nova-metadata-0" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.936773 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36e9a87b-3c85-45a5-8677-02d69052038c-logs\") pod \"nova-metadata-0\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " pod="openstack/nova-metadata-0" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.940737 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-config-data\") pod \"nova-metadata-0\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " pod="openstack/nova-metadata-0" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.942398 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " pod="openstack/nova-metadata-0" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.949966 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " pod="openstack/nova-metadata-0" Feb 18 12:09:40 crc kubenswrapper[4717]: I0218 12:09:40.954533 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zwxn\" (UniqueName: \"kubernetes.io/projected/36e9a87b-3c85-45a5-8677-02d69052038c-kube-api-access-2zwxn\") pod 
\"nova-metadata-0\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " pod="openstack/nova-metadata-0" Feb 18 12:09:41 crc kubenswrapper[4717]: I0218 12:09:41.065959 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af87b901-682d-453e-adbc-f7f354e6aed6" path="/var/lib/kubelet/pods/af87b901-682d-453e-adbc-f7f354e6aed6/volumes" Feb 18 12:09:41 crc kubenswrapper[4717]: I0218 12:09:41.114843 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 12:09:41 crc kubenswrapper[4717]: I0218 12:09:41.638779 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:09:41 crc kubenswrapper[4717]: I0218 12:09:41.710458 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36e9a87b-3c85-45a5-8677-02d69052038c","Type":"ContainerStarted","Data":"0dc49a27ad05ceba2b87665049cf864c00a1696e5473b6866277610b8f6ed590"} Feb 18 12:09:42 crc kubenswrapper[4717]: I0218 12:09:42.722503 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36e9a87b-3c85-45a5-8677-02d69052038c","Type":"ContainerStarted","Data":"208f42341cb01cadd43ec8568207272847a6240d7fafb715ea6eb7d0ac51f16f"} Feb 18 12:09:42 crc kubenswrapper[4717]: I0218 12:09:42.723007 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36e9a87b-3c85-45a5-8677-02d69052038c","Type":"ContainerStarted","Data":"5e344eba368bc120816d5b921d30ee61f3bc118a59b00e452ec03584034ecc80"} Feb 18 12:09:42 crc kubenswrapper[4717]: I0218 12:09:42.727588 4717 generic.go:334] "Generic (PLEG): container finished" podID="91c00704-11f3-4b61-8964-98cd2f711987" containerID="3a420f359ada50e6643af8858c6f43293f9ae71afd4f7ed49900185d3f6bb72c" exitCode=0 Feb 18 12:09:42 crc kubenswrapper[4717]: I0218 12:09:42.727623 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8rrq7" 
event={"ID":"91c00704-11f3-4b61-8964-98cd2f711987","Type":"ContainerDied","Data":"3a420f359ada50e6643af8858c6f43293f9ae71afd4f7ed49900185d3f6bb72c"} Feb 18 12:09:42 crc kubenswrapper[4717]: I0218 12:09:42.754333 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.7543061829999997 podStartE2EDuration="2.754306183s" podCreationTimestamp="2026-02-18 12:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:09:42.748710881 +0000 UTC m=+1217.150812217" watchObservedRunningTime="2026-02-18 12:09:42.754306183 +0000 UTC m=+1217.156407499" Feb 18 12:09:42 crc kubenswrapper[4717]: I0218 12:09:42.775985 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:09:42 crc kubenswrapper[4717]: I0218 12:09:42.776071 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:09:42 crc kubenswrapper[4717]: I0218 12:09:42.911720 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:09:42 crc kubenswrapper[4717]: I0218 12:09:42.933429 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 12:09:42 crc kubenswrapper[4717]: I0218 12:09:42.974483 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 12:09:43 crc 
kubenswrapper[4717]: I0218 12:09:43.180047 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 12:09:43 crc kubenswrapper[4717]: I0218 12:09:43.180512 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 12:09:43 crc kubenswrapper[4717]: I0218 12:09:43.478545 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:09:43 crc kubenswrapper[4717]: I0218 12:09:43.553008 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-mmbsj"] Feb 18 12:09:43 crc kubenswrapper[4717]: I0218 12:09:43.553414 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" podUID="40820fca-bd29-4dfa-bfb3-04a2209eee32" containerName="dnsmasq-dns" containerID="cri-o://194a4b144ace41854919e7b492b867efb56c6e1652ddaf14eddd579e9b1c39ee" gracePeriod=10 Feb 18 12:09:43 crc kubenswrapper[4717]: I0218 12:09:43.741275 4717 generic.go:334] "Generic (PLEG): container finished" podID="40820fca-bd29-4dfa-bfb3-04a2209eee32" containerID="194a4b144ace41854919e7b492b867efb56c6e1652ddaf14eddd579e9b1c39ee" exitCode=0 Feb 18 12:09:43 crc kubenswrapper[4717]: I0218 12:09:43.741366 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" event={"ID":"40820fca-bd29-4dfa-bfb3-04a2209eee32","Type":"ContainerDied","Data":"194a4b144ace41854919e7b492b867efb56c6e1652ddaf14eddd579e9b1c39ee"} Feb 18 12:09:43 crc kubenswrapper[4717]: I0218 12:09:43.743525 4717 generic.go:334] "Generic (PLEG): container finished" podID="e776a05b-0cc9-43cc-9554-c534022da512" containerID="36097929a2be8c633219df591fd6edefe007773696afcea09c375684061e58e7" exitCode=0 Feb 18 12:09:43 crc kubenswrapper[4717]: I0218 12:09:43.744698 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-2lj58" event={"ID":"e776a05b-0cc9-43cc-9554-c534022da512","Type":"ContainerDied","Data":"36097929a2be8c633219df591fd6edefe007773696afcea09c375684061e58e7"} Feb 18 12:09:43 crc kubenswrapper[4717]: I0218 12:09:43.828072 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.221476 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.221982 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.278403 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.286044 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8rrq7" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.426985 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-combined-ca-bundle\") pod \"91c00704-11f3-4b61-8964-98cd2f711987\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.427104 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-config-data\") pod \"91c00704-11f3-4b61-8964-98cd2f711987\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.427245 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-dns-svc\") pod \"40820fca-bd29-4dfa-bfb3-04a2209eee32\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.427294 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-config\") pod \"40820fca-bd29-4dfa-bfb3-04a2209eee32\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.427329 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-scripts\") pod \"91c00704-11f3-4b61-8964-98cd2f711987\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.427422 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-ovsdbserver-sb\") pod \"40820fca-bd29-4dfa-bfb3-04a2209eee32\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.427448 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-ovsdbserver-nb\") pod \"40820fca-bd29-4dfa-bfb3-04a2209eee32\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.427505 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fk4q\" (UniqueName: \"kubernetes.io/projected/40820fca-bd29-4dfa-bfb3-04a2209eee32-kube-api-access-2fk4q\") pod \"40820fca-bd29-4dfa-bfb3-04a2209eee32\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.427527 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-dns-swift-storage-0\") pod \"40820fca-bd29-4dfa-bfb3-04a2209eee32\" (UID: \"40820fca-bd29-4dfa-bfb3-04a2209eee32\") " Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.427562 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9wtm\" (UniqueName: \"kubernetes.io/projected/91c00704-11f3-4b61-8964-98cd2f711987-kube-api-access-x9wtm\") pod \"91c00704-11f3-4b61-8964-98cd2f711987\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.437423 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91c00704-11f3-4b61-8964-98cd2f711987-kube-api-access-x9wtm" (OuterVolumeSpecName: "kube-api-access-x9wtm") pod "91c00704-11f3-4b61-8964-98cd2f711987" (UID: "91c00704-11f3-4b61-8964-98cd2f711987"). 
InnerVolumeSpecName "kube-api-access-x9wtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.440554 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-scripts" (OuterVolumeSpecName: "scripts") pod "91c00704-11f3-4b61-8964-98cd2f711987" (UID: "91c00704-11f3-4b61-8964-98cd2f711987"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.445763 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40820fca-bd29-4dfa-bfb3-04a2209eee32-kube-api-access-2fk4q" (OuterVolumeSpecName: "kube-api-access-2fk4q") pod "40820fca-bd29-4dfa-bfb3-04a2209eee32" (UID: "40820fca-bd29-4dfa-bfb3-04a2209eee32"). InnerVolumeSpecName "kube-api-access-2fk4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:09:44 crc kubenswrapper[4717]: E0218 12:09:44.467373 4717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-combined-ca-bundle podName:91c00704-11f3-4b61-8964-98cd2f711987 nodeName:}" failed. No retries permitted until 2026-02-18 12:09:44.967325795 +0000 UTC m=+1219.369427111 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-combined-ca-bundle") pod "91c00704-11f3-4b61-8964-98cd2f711987" (UID: "91c00704-11f3-4b61-8964-98cd2f711987") : error deleting /var/lib/kubelet/pods/91c00704-11f3-4b61-8964-98cd2f711987/volume-subpaths: remove /var/lib/kubelet/pods/91c00704-11f3-4b61-8964-98cd2f711987/volume-subpaths: no such file or directory Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.471077 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-config-data" (OuterVolumeSpecName: "config-data") pod "91c00704-11f3-4b61-8964-98cd2f711987" (UID: "91c00704-11f3-4b61-8964-98cd2f711987"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.492875 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "40820fca-bd29-4dfa-bfb3-04a2209eee32" (UID: "40820fca-bd29-4dfa-bfb3-04a2209eee32"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.493089 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "40820fca-bd29-4dfa-bfb3-04a2209eee32" (UID: "40820fca-bd29-4dfa-bfb3-04a2209eee32"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.494379 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "40820fca-bd29-4dfa-bfb3-04a2209eee32" (UID: "40820fca-bd29-4dfa-bfb3-04a2209eee32"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.507082 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40820fca-bd29-4dfa-bfb3-04a2209eee32" (UID: "40820fca-bd29-4dfa-bfb3-04a2209eee32"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.522204 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-config" (OuterVolumeSpecName: "config") pod "40820fca-bd29-4dfa-bfb3-04a2209eee32" (UID: "40820fca-bd29-4dfa-bfb3-04a2209eee32"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.530611 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.530651 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.530665 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.530674 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.530686 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fk4q\" (UniqueName: \"kubernetes.io/projected/40820fca-bd29-4dfa-bfb3-04a2209eee32-kube-api-access-2fk4q\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.530699 4717 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.530708 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9wtm\" (UniqueName: \"kubernetes.io/projected/91c00704-11f3-4b61-8964-98cd2f711987-kube-api-access-x9wtm\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.530717 
4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.530729 4717 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40820fca-bd29-4dfa-bfb3-04a2209eee32-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.755563 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8rrq7" event={"ID":"91c00704-11f3-4b61-8964-98cd2f711987","Type":"ContainerDied","Data":"fdc9460823ac4923d3e17be081289f1b3dadb3d1ceec00862548efc18ea0d999"} Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.755627 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdc9460823ac4923d3e17be081289f1b3dadb3d1ceec00862548efc18ea0d999" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.755737 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8rrq7" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.758795 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.759979 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-mmbsj" event={"ID":"40820fca-bd29-4dfa-bfb3-04a2209eee32","Type":"ContainerDied","Data":"cd53a54b50f43063bedcdcad32742cf8e677e7a841580e6af8937f5326e79f25"} Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.760069 4717 scope.go:117] "RemoveContainer" containerID="194a4b144ace41854919e7b492b867efb56c6e1652ddaf14eddd579e9b1c39ee" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.816136 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-mmbsj"] Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.831327 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-mmbsj"] Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.834749 4717 scope.go:117] "RemoveContainer" containerID="de6fdc4ab3e16f415093c32892d08f9c29a89ce45dfa06b8951e342d90b64e3e" Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.974730 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.975089 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" containerName="nova-api-log" containerID="cri-o://543e463a84d80c9ec78063132f672f75e9852a30d334b2a9cef19c95054ac2d7" gracePeriod=30 Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.975725 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" containerName="nova-api-api" containerID="cri-o://8ebccb22596d97480d97dfe99dbfa714a45e25a7c9908727ec5320d7faae165d" gracePeriod=30 Feb 18 12:09:44 crc kubenswrapper[4717]: I0218 12:09:44.985417 4717 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.043888 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-combined-ca-bundle\") pod \"91c00704-11f3-4b61-8964-98cd2f711987\" (UID: \"91c00704-11f3-4b61-8964-98cd2f711987\") " Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.061443 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40820fca-bd29-4dfa-bfb3-04a2209eee32" path="/var/lib/kubelet/pods/40820fca-bd29-4dfa-bfb3-04a2209eee32/volumes" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.085432 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91c00704-11f3-4b61-8964-98cd2f711987" (UID: "91c00704-11f3-4b61-8964-98cd2f711987"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.150999 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c00704-11f3-4b61-8964-98cd2f711987-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.204440 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.204744 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="36e9a87b-3c85-45a5-8677-02d69052038c" containerName="nova-metadata-log" containerID="cri-o://5e344eba368bc120816d5b921d30ee61f3bc118a59b00e452ec03584034ecc80" gracePeriod=30 Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.205492 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="36e9a87b-3c85-45a5-8677-02d69052038c" containerName="nova-metadata-metadata" containerID="cri-o://208f42341cb01cadd43ec8568207272847a6240d7fafb715ea6eb7d0ac51f16f" gracePeriod=30 Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.219010 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2lj58" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.354345 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-scripts\") pod \"e776a05b-0cc9-43cc-9554-c534022da512\" (UID: \"e776a05b-0cc9-43cc-9554-c534022da512\") " Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.354476 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-config-data\") pod \"e776a05b-0cc9-43cc-9554-c534022da512\" (UID: \"e776a05b-0cc9-43cc-9554-c534022da512\") " Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.354715 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-combined-ca-bundle\") pod \"e776a05b-0cc9-43cc-9554-c534022da512\" (UID: \"e776a05b-0cc9-43cc-9554-c534022da512\") " Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.354777 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flsm4\" (UniqueName: \"kubernetes.io/projected/e776a05b-0cc9-43cc-9554-c534022da512-kube-api-access-flsm4\") pod \"e776a05b-0cc9-43cc-9554-c534022da512\" (UID: \"e776a05b-0cc9-43cc-9554-c534022da512\") " Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.361420 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-scripts" (OuterVolumeSpecName: "scripts") pod "e776a05b-0cc9-43cc-9554-c534022da512" (UID: "e776a05b-0cc9-43cc-9554-c534022da512"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.361449 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e776a05b-0cc9-43cc-9554-c534022da512-kube-api-access-flsm4" (OuterVolumeSpecName: "kube-api-access-flsm4") pod "e776a05b-0cc9-43cc-9554-c534022da512" (UID: "e776a05b-0cc9-43cc-9554-c534022da512"). InnerVolumeSpecName "kube-api-access-flsm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.383841 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e776a05b-0cc9-43cc-9554-c534022da512" (UID: "e776a05b-0cc9-43cc-9554-c534022da512"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.393224 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-config-data" (OuterVolumeSpecName: "config-data") pod "e776a05b-0cc9-43cc-9554-c534022da512" (UID: "e776a05b-0cc9-43cc-9554-c534022da512"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.457349 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flsm4\" (UniqueName: \"kubernetes.io/projected/e776a05b-0cc9-43cc-9554-c534022da512-kube-api-access-flsm4\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.457388 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.457398 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.457410 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e776a05b-0cc9-43cc-9554-c534022da512-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.775347 4717 generic.go:334] "Generic (PLEG): container finished" podID="36e9a87b-3c85-45a5-8677-02d69052038c" containerID="208f42341cb01cadd43ec8568207272847a6240d7fafb715ea6eb7d0ac51f16f" exitCode=0 Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.775396 4717 generic.go:334] "Generic (PLEG): container finished" podID="36e9a87b-3c85-45a5-8677-02d69052038c" containerID="5e344eba368bc120816d5b921d30ee61f3bc118a59b00e452ec03584034ecc80" exitCode=143 Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.775453 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36e9a87b-3c85-45a5-8677-02d69052038c","Type":"ContainerDied","Data":"208f42341cb01cadd43ec8568207272847a6240d7fafb715ea6eb7d0ac51f16f"} Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.775495 4717 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36e9a87b-3c85-45a5-8677-02d69052038c","Type":"ContainerDied","Data":"5e344eba368bc120816d5b921d30ee61f3bc118a59b00e452ec03584034ecc80"} Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.775507 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36e9a87b-3c85-45a5-8677-02d69052038c","Type":"ContainerDied","Data":"0dc49a27ad05ceba2b87665049cf864c00a1696e5473b6866277610b8f6ed590"} Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.775519 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dc49a27ad05ceba2b87665049cf864c00a1696e5473b6866277610b8f6ed590" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.781145 4717 generic.go:334] "Generic (PLEG): container finished" podID="20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" containerID="543e463a84d80c9ec78063132f672f75e9852a30d334b2a9cef19c95054ac2d7" exitCode=143 Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.781229 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e","Type":"ContainerDied","Data":"543e463a84d80c9ec78063132f672f75e9852a30d334b2a9cef19c95054ac2d7"} Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.783040 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d7a56f42-d311-4ef0-af48-b3eb120ef805" containerName="nova-scheduler-scheduler" containerID="cri-o://eb98497e73e4f459ad425bd8b4cdac42ed9913b5b6c583d4bbfccbc291b902c3" gracePeriod=30 Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.783498 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2lj58" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.784536 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2lj58" event={"ID":"e776a05b-0cc9-43cc-9554-c534022da512","Type":"ContainerDied","Data":"d830012311ca6b5cbca38e4ba9fa73abacfada2ebc1fefc316c9e966024c97aa"} Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.784605 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d830012311ca6b5cbca38e4ba9fa73abacfada2ebc1fefc316c9e966024c97aa" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.879425 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 12:09:45 crc kubenswrapper[4717]: E0218 12:09:45.879993 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40820fca-bd29-4dfa-bfb3-04a2209eee32" containerName="dnsmasq-dns" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.880021 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="40820fca-bd29-4dfa-bfb3-04a2209eee32" containerName="dnsmasq-dns" Feb 18 12:09:45 crc kubenswrapper[4717]: E0218 12:09:45.880036 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c00704-11f3-4b61-8964-98cd2f711987" containerName="nova-manage" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.880043 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c00704-11f3-4b61-8964-98cd2f711987" containerName="nova-manage" Feb 18 12:09:45 crc kubenswrapper[4717]: E0218 12:09:45.880056 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e776a05b-0cc9-43cc-9554-c534022da512" containerName="nova-cell1-conductor-db-sync" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.880066 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="e776a05b-0cc9-43cc-9554-c534022da512" containerName="nova-cell1-conductor-db-sync" Feb 18 12:09:45 crc 
kubenswrapper[4717]: E0218 12:09:45.880088 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40820fca-bd29-4dfa-bfb3-04a2209eee32" containerName="init" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.880094 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="40820fca-bd29-4dfa-bfb3-04a2209eee32" containerName="init" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.880312 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="e776a05b-0cc9-43cc-9554-c534022da512" containerName="nova-cell1-conductor-db-sync" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.880326 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="40820fca-bd29-4dfa-bfb3-04a2209eee32" containerName="dnsmasq-dns" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.880353 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="91c00704-11f3-4b61-8964-98cd2f711987" containerName="nova-manage" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.881157 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.887211 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.891937 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.917962 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.973550 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kfhs\" (UniqueName: \"kubernetes.io/projected/377276ef-9093-4bae-954b-b833c89261ea-kube-api-access-8kfhs\") pod \"nova-cell1-conductor-0\" (UID: \"377276ef-9093-4bae-954b-b833c89261ea\") " pod="openstack/nova-cell1-conductor-0" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.973890 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377276ef-9093-4bae-954b-b833c89261ea-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"377276ef-9093-4bae-954b-b833c89261ea\") " pod="openstack/nova-cell1-conductor-0" Feb 18 12:09:45 crc kubenswrapper[4717]: I0218 12:09:45.974063 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377276ef-9093-4bae-954b-b833c89261ea-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"377276ef-9093-4bae-954b-b833c89261ea\") " pod="openstack/nova-cell1-conductor-0" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.076114 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-config-data\") pod \"36e9a87b-3c85-45a5-8677-02d69052038c\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.076319 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-nova-metadata-tls-certs\") pod \"36e9a87b-3c85-45a5-8677-02d69052038c\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " Feb 18 12:09:46 crc 
kubenswrapper[4717]: I0218 12:09:46.076437 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36e9a87b-3c85-45a5-8677-02d69052038c-logs\") pod \"36e9a87b-3c85-45a5-8677-02d69052038c\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.076470 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zwxn\" (UniqueName: \"kubernetes.io/projected/36e9a87b-3c85-45a5-8677-02d69052038c-kube-api-access-2zwxn\") pod \"36e9a87b-3c85-45a5-8677-02d69052038c\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.076533 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-combined-ca-bundle\") pod \"36e9a87b-3c85-45a5-8677-02d69052038c\" (UID: \"36e9a87b-3c85-45a5-8677-02d69052038c\") " Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.076880 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377276ef-9093-4bae-954b-b833c89261ea-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"377276ef-9093-4bae-954b-b833c89261ea\") " pod="openstack/nova-cell1-conductor-0" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.077002 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377276ef-9093-4bae-954b-b833c89261ea-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"377276ef-9093-4bae-954b-b833c89261ea\") " pod="openstack/nova-cell1-conductor-0" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.077151 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kfhs\" (UniqueName: 
\"kubernetes.io/projected/377276ef-9093-4bae-954b-b833c89261ea-kube-api-access-8kfhs\") pod \"nova-cell1-conductor-0\" (UID: \"377276ef-9093-4bae-954b-b833c89261ea\") " pod="openstack/nova-cell1-conductor-0" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.077208 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36e9a87b-3c85-45a5-8677-02d69052038c-logs" (OuterVolumeSpecName: "logs") pod "36e9a87b-3c85-45a5-8677-02d69052038c" (UID: "36e9a87b-3c85-45a5-8677-02d69052038c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.083683 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/377276ef-9093-4bae-954b-b833c89261ea-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"377276ef-9093-4bae-954b-b833c89261ea\") " pod="openstack/nova-cell1-conductor-0" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.084051 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/377276ef-9093-4bae-954b-b833c89261ea-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"377276ef-9093-4bae-954b-b833c89261ea\") " pod="openstack/nova-cell1-conductor-0" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.084434 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36e9a87b-3c85-45a5-8677-02d69052038c-kube-api-access-2zwxn" (OuterVolumeSpecName: "kube-api-access-2zwxn") pod "36e9a87b-3c85-45a5-8677-02d69052038c" (UID: "36e9a87b-3c85-45a5-8677-02d69052038c"). InnerVolumeSpecName "kube-api-access-2zwxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.105838 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kfhs\" (UniqueName: \"kubernetes.io/projected/377276ef-9093-4bae-954b-b833c89261ea-kube-api-access-8kfhs\") pod \"nova-cell1-conductor-0\" (UID: \"377276ef-9093-4bae-954b-b833c89261ea\") " pod="openstack/nova-cell1-conductor-0" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.106147 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36e9a87b-3c85-45a5-8677-02d69052038c" (UID: "36e9a87b-3c85-45a5-8677-02d69052038c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.108124 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-config-data" (OuterVolumeSpecName: "config-data") pod "36e9a87b-3c85-45a5-8677-02d69052038c" (UID: "36e9a87b-3c85-45a5-8677-02d69052038c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.139396 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "36e9a87b-3c85-45a5-8677-02d69052038c" (UID: "36e9a87b-3c85-45a5-8677-02d69052038c"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.179780 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.179829 4717 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.179843 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36e9a87b-3c85-45a5-8677-02d69052038c-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.179858 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zwxn\" (UniqueName: \"kubernetes.io/projected/36e9a87b-3c85-45a5-8677-02d69052038c-kube-api-access-2zwxn\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.179870 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e9a87b-3c85-45a5-8677-02d69052038c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.234784 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.696515 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.796207 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"377276ef-9093-4bae-954b-b833c89261ea","Type":"ContainerStarted","Data":"31f871dfcf74643a8720cd455dd146490eb8ebe047a7b66fb12606f4e3cd065c"} Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.796282 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.902242 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.930980 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.958905 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:09:46 crc kubenswrapper[4717]: E0218 12:09:46.960376 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e9a87b-3c85-45a5-8677-02d69052038c" containerName="nova-metadata-metadata" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.960427 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e9a87b-3c85-45a5-8677-02d69052038c" containerName="nova-metadata-metadata" Feb 18 12:09:46 crc kubenswrapper[4717]: E0218 12:09:46.960470 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e9a87b-3c85-45a5-8677-02d69052038c" containerName="nova-metadata-log" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.960478 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e9a87b-3c85-45a5-8677-02d69052038c" containerName="nova-metadata-log" Feb 18 12:09:46 crc 
kubenswrapper[4717]: I0218 12:09:46.960706 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="36e9a87b-3c85-45a5-8677-02d69052038c" containerName="nova-metadata-metadata" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.960725 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="36e9a87b-3c85-45a5-8677-02d69052038c" containerName="nova-metadata-log" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.962160 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.969658 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.969910 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 12:09:46 crc kubenswrapper[4717]: I0218 12:09:46.973947 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.051381 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36e9a87b-3c85-45a5-8677-02d69052038c" path="/var/lib/kubelet/pods/36e9a87b-3c85-45a5-8677-02d69052038c/volumes" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.101046 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d7bf71-c090-4908-a5b3-0e5b1551b137-logs\") pod \"nova-metadata-0\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " pod="openstack/nova-metadata-0" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.101177 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-config-data\") pod \"nova-metadata-0\" (UID: 
\"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " pod="openstack/nova-metadata-0" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.101228 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " pod="openstack/nova-metadata-0" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.101281 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rhhk\" (UniqueName: \"kubernetes.io/projected/92d7bf71-c090-4908-a5b3-0e5b1551b137-kube-api-access-8rhhk\") pod \"nova-metadata-0\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " pod="openstack/nova-metadata-0" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.101375 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " pod="openstack/nova-metadata-0" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.203328 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " pod="openstack/nova-metadata-0" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.203415 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d7bf71-c090-4908-a5b3-0e5b1551b137-logs\") pod \"nova-metadata-0\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " pod="openstack/nova-metadata-0" Feb 18 12:09:47 crc 
kubenswrapper[4717]: I0218 12:09:47.203550 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-config-data\") pod \"nova-metadata-0\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " pod="openstack/nova-metadata-0" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.203598 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " pod="openstack/nova-metadata-0" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.203620 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rhhk\" (UniqueName: \"kubernetes.io/projected/92d7bf71-c090-4908-a5b3-0e5b1551b137-kube-api-access-8rhhk\") pod \"nova-metadata-0\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " pod="openstack/nova-metadata-0" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.204168 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d7bf71-c090-4908-a5b3-0e5b1551b137-logs\") pod \"nova-metadata-0\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " pod="openstack/nova-metadata-0" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.210578 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " pod="openstack/nova-metadata-0" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.210633 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-config-data\") pod \"nova-metadata-0\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " pod="openstack/nova-metadata-0" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.211209 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " pod="openstack/nova-metadata-0" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.227303 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rhhk\" (UniqueName: \"kubernetes.io/projected/92d7bf71-c090-4908-a5b3-0e5b1551b137-kube-api-access-8rhhk\") pod \"nova-metadata-0\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " pod="openstack/nova-metadata-0" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.285879 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.757178 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:09:47 crc kubenswrapper[4717]: W0218 12:09:47.762347 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92d7bf71_c090_4908_a5b3_0e5b1551b137.slice/crio-5b0de8107314b95246b124fe8b9fa2dfb7c0660e6a35febe9464bfb738b04aa0 WatchSource:0}: Error finding container 5b0de8107314b95246b124fe8b9fa2dfb7c0660e6a35febe9464bfb738b04aa0: Status 404 returned error can't find the container with id 5b0de8107314b95246b124fe8b9fa2dfb7c0660e6a35febe9464bfb738b04aa0 Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.811427 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"377276ef-9093-4bae-954b-b833c89261ea","Type":"ContainerStarted","Data":"2f0cd4385c07e24f47c176817ba647f830e47665c905cb2304e4f439a3127d18"} Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.812127 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.816704 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92d7bf71-c090-4908-a5b3-0e5b1551b137","Type":"ContainerStarted","Data":"5b0de8107314b95246b124fe8b9fa2dfb7c0660e6a35febe9464bfb738b04aa0"} Feb 18 12:09:47 crc kubenswrapper[4717]: I0218 12:09:47.847467 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.847439621 podStartE2EDuration="2.847439621s" podCreationTimestamp="2026-02-18 12:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:09:47.83286314 +0000 UTC 
m=+1222.234964456" watchObservedRunningTime="2026-02-18 12:09:47.847439621 +0000 UTC m=+1222.249540937" Feb 18 12:09:47 crc kubenswrapper[4717]: E0218 12:09:47.935292 4717 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eb98497e73e4f459ad425bd8b4cdac42ed9913b5b6c583d4bbfccbc291b902c3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 12:09:47 crc kubenswrapper[4717]: E0218 12:09:47.936980 4717 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eb98497e73e4f459ad425bd8b4cdac42ed9913b5b6c583d4bbfccbc291b902c3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 12:09:47 crc kubenswrapper[4717]: E0218 12:09:47.940751 4717 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eb98497e73e4f459ad425bd8b4cdac42ed9913b5b6c583d4bbfccbc291b902c3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 12:09:47 crc kubenswrapper[4717]: E0218 12:09:47.940823 4717 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d7a56f42-d311-4ef0-af48-b3eb120ef805" containerName="nova-scheduler-scheduler" Feb 18 12:09:48 crc kubenswrapper[4717]: I0218 12:09:48.877184 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92d7bf71-c090-4908-a5b3-0e5b1551b137","Type":"ContainerStarted","Data":"1d0b37b52c1518b058029985130700ff1b266a597f9c8caca55cbbae99b90040"} Feb 18 12:09:48 crc 
kubenswrapper[4717]: I0218 12:09:48.878154 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92d7bf71-c090-4908-a5b3-0e5b1551b137","Type":"ContainerStarted","Data":"6dc516091b3f13d57f0ce7db9e0aca937ad7659f630f8e14d9d8c716c392fe34"} Feb 18 12:09:48 crc kubenswrapper[4717]: I0218 12:09:48.907963 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.907943188 podStartE2EDuration="2.907943188s" podCreationTimestamp="2026-02-18 12:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:09:48.902640415 +0000 UTC m=+1223.304741731" watchObservedRunningTime="2026-02-18 12:09:48.907943188 +0000 UTC m=+1223.310044504" Feb 18 12:09:49 crc kubenswrapper[4717]: I0218 12:09:49.897194 4717 generic.go:334] "Generic (PLEG): container finished" podID="d7a56f42-d311-4ef0-af48-b3eb120ef805" containerID="eb98497e73e4f459ad425bd8b4cdac42ed9913b5b6c583d4bbfccbc291b902c3" exitCode=0 Feb 18 12:09:49 crc kubenswrapper[4717]: I0218 12:09:49.897343 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d7a56f42-d311-4ef0-af48-b3eb120ef805","Type":"ContainerDied","Data":"eb98497e73e4f459ad425bd8b4cdac42ed9913b5b6c583d4bbfccbc291b902c3"} Feb 18 12:09:49 crc kubenswrapper[4717]: I0218 12:09:49.897733 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d7a56f42-d311-4ef0-af48-b3eb120ef805","Type":"ContainerDied","Data":"9fe12f9fb9d4dde05cef802a8198296506da1d47fc0825d6a1a658c68afac9f4"} Feb 18 12:09:49 crc kubenswrapper[4717]: I0218 12:09:49.897755 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fe12f9fb9d4dde05cef802a8198296506da1d47fc0825d6a1a658c68afac9f4" Feb 18 12:09:49 crc kubenswrapper[4717]: I0218 12:09:49.927704 4717 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.005682 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.108535 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a56f42-d311-4ef0-af48-b3eb120ef805-config-data\") pod \"d7a56f42-d311-4ef0-af48-b3eb120ef805\" (UID: \"d7a56f42-d311-4ef0-af48-b3eb120ef805\") " Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.108655 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm2db\" (UniqueName: \"kubernetes.io/projected/d7a56f42-d311-4ef0-af48-b3eb120ef805-kube-api-access-lm2db\") pod \"d7a56f42-d311-4ef0-af48-b3eb120ef805\" (UID: \"d7a56f42-d311-4ef0-af48-b3eb120ef805\") " Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.108687 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a56f42-d311-4ef0-af48-b3eb120ef805-combined-ca-bundle\") pod \"d7a56f42-d311-4ef0-af48-b3eb120ef805\" (UID: \"d7a56f42-d311-4ef0-af48-b3eb120ef805\") " Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.119900 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a56f42-d311-4ef0-af48-b3eb120ef805-kube-api-access-lm2db" (OuterVolumeSpecName: "kube-api-access-lm2db") pod "d7a56f42-d311-4ef0-af48-b3eb120ef805" (UID: "d7a56f42-d311-4ef0-af48-b3eb120ef805"). InnerVolumeSpecName "kube-api-access-lm2db". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.146577 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a56f42-d311-4ef0-af48-b3eb120ef805-config-data" (OuterVolumeSpecName: "config-data") pod "d7a56f42-d311-4ef0-af48-b3eb120ef805" (UID: "d7a56f42-d311-4ef0-af48-b3eb120ef805"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.181002 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a56f42-d311-4ef0-af48-b3eb120ef805-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7a56f42-d311-4ef0-af48-b3eb120ef805" (UID: "d7a56f42-d311-4ef0-af48-b3eb120ef805"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.211644 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a56f42-d311-4ef0-af48-b3eb120ef805-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.211681 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm2db\" (UniqueName: \"kubernetes.io/projected/d7a56f42-d311-4ef0-af48-b3eb120ef805-kube-api-access-lm2db\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.211692 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a56f42-d311-4ef0-af48-b3eb120ef805-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.687422 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.822764 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-logs\") pod \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\" (UID: \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\") " Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.822944 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpl8x\" (UniqueName: \"kubernetes.io/projected/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-kube-api-access-gpl8x\") pod \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\" (UID: \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\") " Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.822994 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-combined-ca-bundle\") pod \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\" (UID: \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\") " Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.823101 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-config-data\") pod \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\" (UID: \"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e\") " Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.825870 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-logs" (OuterVolumeSpecName: "logs") pod "20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" (UID: "20542147-8bf6-4a99-a07c-0d0ff9ee2c8e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.828940 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-kube-api-access-gpl8x" (OuterVolumeSpecName: "kube-api-access-gpl8x") pod "20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" (UID: "20542147-8bf6-4a99-a07c-0d0ff9ee2c8e"). InnerVolumeSpecName "kube-api-access-gpl8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.851375 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" (UID: "20542147-8bf6-4a99-a07c-0d0ff9ee2c8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.853781 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-config-data" (OuterVolumeSpecName: "config-data") pod "20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" (UID: "20542147-8bf6-4a99-a07c-0d0ff9ee2c8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.910721 4717 generic.go:334] "Generic (PLEG): container finished" podID="20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" containerID="8ebccb22596d97480d97dfe99dbfa714a45e25a7c9908727ec5320d7faae165d" exitCode=0 Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.911375 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.914089 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e","Type":"ContainerDied","Data":"8ebccb22596d97480d97dfe99dbfa714a45e25a7c9908727ec5320d7faae165d"} Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.916002 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20542147-8bf6-4a99-a07c-0d0ff9ee2c8e","Type":"ContainerDied","Data":"b397069c99d118063971471efd547687e2219302d15b0ad65e71a5fcf71ea8f9"} Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.916133 4717 scope.go:117] "RemoveContainer" containerID="8ebccb22596d97480d97dfe99dbfa714a45e25a7c9908727ec5320d7faae165d" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.914119 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.926171 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.926212 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpl8x\" (UniqueName: \"kubernetes.io/projected/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-kube-api-access-gpl8x\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.926232 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.926245 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.947505 4717 scope.go:117] "RemoveContainer" containerID="543e463a84d80c9ec78063132f672f75e9852a30d334b2a9cef19c95054ac2d7" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.987356 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.992123 4717 scope.go:117] "RemoveContainer" containerID="8ebccb22596d97480d97dfe99dbfa714a45e25a7c9908727ec5320d7faae165d" Feb 18 12:09:50 crc kubenswrapper[4717]: E0218 12:09:50.998419 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ebccb22596d97480d97dfe99dbfa714a45e25a7c9908727ec5320d7faae165d\": container with ID starting with 8ebccb22596d97480d97dfe99dbfa714a45e25a7c9908727ec5320d7faae165d not found: ID does not exist" containerID="8ebccb22596d97480d97dfe99dbfa714a45e25a7c9908727ec5320d7faae165d" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.998489 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ebccb22596d97480d97dfe99dbfa714a45e25a7c9908727ec5320d7faae165d"} err="failed to get container status \"8ebccb22596d97480d97dfe99dbfa714a45e25a7c9908727ec5320d7faae165d\": rpc error: code = NotFound desc = could not find container \"8ebccb22596d97480d97dfe99dbfa714a45e25a7c9908727ec5320d7faae165d\": container with ID starting with 8ebccb22596d97480d97dfe99dbfa714a45e25a7c9908727ec5320d7faae165d not found: ID does not exist" Feb 18 12:09:50 crc kubenswrapper[4717]: I0218 12:09:50.998525 4717 scope.go:117] "RemoveContainer" containerID="543e463a84d80c9ec78063132f672f75e9852a30d334b2a9cef19c95054ac2d7" Feb 18 12:09:50 crc kubenswrapper[4717]: E0218 12:09:50.999920 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"543e463a84d80c9ec78063132f672f75e9852a30d334b2a9cef19c95054ac2d7\": container with ID starting with 543e463a84d80c9ec78063132f672f75e9852a30d334b2a9cef19c95054ac2d7 not found: ID does not exist" containerID="543e463a84d80c9ec78063132f672f75e9852a30d334b2a9cef19c95054ac2d7" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:50.999958 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543e463a84d80c9ec78063132f672f75e9852a30d334b2a9cef19c95054ac2d7"} err="failed to get container status \"543e463a84d80c9ec78063132f672f75e9852a30d334b2a9cef19c95054ac2d7\": rpc error: code = NotFound desc = could not find container \"543e463a84d80c9ec78063132f672f75e9852a30d334b2a9cef19c95054ac2d7\": container with ID starting with 543e463a84d80c9ec78063132f672f75e9852a30d334b2a9cef19c95054ac2d7 not found: ID does not exist" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.006611 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.029711 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.053830 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7a56f42-d311-4ef0-af48-b3eb120ef805" path="/var/lib/kubelet/pods/d7a56f42-d311-4ef0-af48-b3eb120ef805/volumes" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.055042 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.055813 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 12:09:51 crc kubenswrapper[4717]: E0218 12:09:51.056380 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a56f42-d311-4ef0-af48-b3eb120ef805" containerName="nova-scheduler-scheduler" Feb 18 12:09:51 crc 
kubenswrapper[4717]: I0218 12:09:51.056401 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a56f42-d311-4ef0-af48-b3eb120ef805" containerName="nova-scheduler-scheduler" Feb 18 12:09:51 crc kubenswrapper[4717]: E0218 12:09:51.056430 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" containerName="nova-api-log" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.056438 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" containerName="nova-api-log" Feb 18 12:09:51 crc kubenswrapper[4717]: E0218 12:09:51.056449 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" containerName="nova-api-api" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.056455 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" containerName="nova-api-api" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.056708 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" containerName="nova-api-log" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.056760 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a56f42-d311-4ef0-af48-b3eb120ef805" containerName="nova-scheduler-scheduler" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.056780 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" containerName="nova-api-api" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.057820 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.065711 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.070523 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.072795 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.077984 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.084595 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.097819 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.234892 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01cb18f3-386e-419b-ac3f-42603ef1dac5-config-data\") pod \"nova-scheduler-0\" (UID: \"01cb18f3-386e-419b-ac3f-42603ef1dac5\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.235421 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb18f3-386e-419b-ac3f-42603ef1dac5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"01cb18f3-386e-419b-ac3f-42603ef1dac5\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.235471 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a005c821-9a23-46ed-9b7a-2cc11e7db64d-logs\") pod \"nova-api-0\" (UID: \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\") " pod="openstack/nova-api-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.235511 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005c821-9a23-46ed-9b7a-2cc11e7db64d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\") " pod="openstack/nova-api-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.235553 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vd68\" (UniqueName: \"kubernetes.io/projected/a005c821-9a23-46ed-9b7a-2cc11e7db64d-kube-api-access-2vd68\") pod \"nova-api-0\" (UID: \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\") " pod="openstack/nova-api-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.235573 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-924nx\" (UniqueName: \"kubernetes.io/projected/01cb18f3-386e-419b-ac3f-42603ef1dac5-kube-api-access-924nx\") pod \"nova-scheduler-0\" (UID: \"01cb18f3-386e-419b-ac3f-42603ef1dac5\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.235597 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005c821-9a23-46ed-9b7a-2cc11e7db64d-config-data\") pod \"nova-api-0\" (UID: \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\") " pod="openstack/nova-api-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.337703 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb18f3-386e-419b-ac3f-42603ef1dac5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"01cb18f3-386e-419b-ac3f-42603ef1dac5\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.337773 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a005c821-9a23-46ed-9b7a-2cc11e7db64d-logs\") pod \"nova-api-0\" (UID: \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\") " pod="openstack/nova-api-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.337810 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005c821-9a23-46ed-9b7a-2cc11e7db64d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\") " pod="openstack/nova-api-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.337842 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vd68\" (UniqueName: \"kubernetes.io/projected/a005c821-9a23-46ed-9b7a-2cc11e7db64d-kube-api-access-2vd68\") pod \"nova-api-0\" (UID: \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\") " pod="openstack/nova-api-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.337863 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-924nx\" (UniqueName: \"kubernetes.io/projected/01cb18f3-386e-419b-ac3f-42603ef1dac5-kube-api-access-924nx\") pod \"nova-scheduler-0\" (UID: \"01cb18f3-386e-419b-ac3f-42603ef1dac5\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.337883 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005c821-9a23-46ed-9b7a-2cc11e7db64d-config-data\") pod \"nova-api-0\" (UID: \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\") " pod="openstack/nova-api-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.337949 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01cb18f3-386e-419b-ac3f-42603ef1dac5-config-data\") pod \"nova-scheduler-0\" (UID: \"01cb18f3-386e-419b-ac3f-42603ef1dac5\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.339880 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a005c821-9a23-46ed-9b7a-2cc11e7db64d-logs\") pod \"nova-api-0\" (UID: \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\") " pod="openstack/nova-api-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.344944 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005c821-9a23-46ed-9b7a-2cc11e7db64d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\") " pod="openstack/nova-api-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.347233 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb18f3-386e-419b-ac3f-42603ef1dac5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"01cb18f3-386e-419b-ac3f-42603ef1dac5\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.347248 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005c821-9a23-46ed-9b7a-2cc11e7db64d-config-data\") pod \"nova-api-0\" (UID: \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\") " pod="openstack/nova-api-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.358066 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01cb18f3-386e-419b-ac3f-42603ef1dac5-config-data\") pod \"nova-scheduler-0\" (UID: \"01cb18f3-386e-419b-ac3f-42603ef1dac5\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 
12:09:51.359505 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vd68\" (UniqueName: \"kubernetes.io/projected/a005c821-9a23-46ed-9b7a-2cc11e7db64d-kube-api-access-2vd68\") pod \"nova-api-0\" (UID: \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\") " pod="openstack/nova-api-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.359731 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-924nx\" (UniqueName: \"kubernetes.io/projected/01cb18f3-386e-419b-ac3f-42603ef1dac5-kube-api-access-924nx\") pod \"nova-scheduler-0\" (UID: \"01cb18f3-386e-419b-ac3f-42603ef1dac5\") " pod="openstack/nova-scheduler-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.389293 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.407847 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.898812 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:09:51 crc kubenswrapper[4717]: W0218 12:09:51.902993 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda005c821_9a23_46ed_9b7a_2cc11e7db64d.slice/crio-94fc8b96e3776ffb5a33f6b25c9cf8a9626ea7f1f7ae6910f8c1e1e5000f7a5f WatchSource:0}: Error finding container 94fc8b96e3776ffb5a33f6b25c9cf8a9626ea7f1f7ae6910f8c1e1e5000f7a5f: Status 404 returned error can't find the container with id 94fc8b96e3776ffb5a33f6b25c9cf8a9626ea7f1f7ae6910f8c1e1e5000f7a5f Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.962196 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"a005c821-9a23-46ed-9b7a-2cc11e7db64d","Type":"ContainerStarted","Data":"94fc8b96e3776ffb5a33f6b25c9cf8a9626ea7f1f7ae6910f8c1e1e5000f7a5f"} Feb 18 12:09:51 crc kubenswrapper[4717]: I0218 12:09:51.996339 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 12:09:52 crc kubenswrapper[4717]: W0218 12:09:52.002823 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01cb18f3_386e_419b_ac3f_42603ef1dac5.slice/crio-90307c79a957683da019f9693da102aa4ec7f132d7f7fa09b9339db2f80b88c0 WatchSource:0}: Error finding container 90307c79a957683da019f9693da102aa4ec7f132d7f7fa09b9339db2f80b88c0: Status 404 returned error can't find the container with id 90307c79a957683da019f9693da102aa4ec7f132d7f7fa09b9339db2f80b88c0 Feb 18 12:09:52 crc kubenswrapper[4717]: I0218 12:09:52.287060 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 12:09:52 crc kubenswrapper[4717]: I0218 12:09:52.288369 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 12:09:52 crc kubenswrapper[4717]: I0218 12:09:52.994295 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a005c821-9a23-46ed-9b7a-2cc11e7db64d","Type":"ContainerStarted","Data":"73842bd2159ffad5e52fd32b1924af781e256e618b1c057d92c836d540fe8101"} Feb 18 12:09:52 crc kubenswrapper[4717]: I0218 12:09:52.994735 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a005c821-9a23-46ed-9b7a-2cc11e7db64d","Type":"ContainerStarted","Data":"83e0129d59962bf80de5d9ef8427b674ec2f4f9dfd22d5aa4930580a75e61a0e"} Feb 18 12:09:52 crc kubenswrapper[4717]: I0218 12:09:52.996973 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"01cb18f3-386e-419b-ac3f-42603ef1dac5","Type":"ContainerStarted","Data":"fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210"} Feb 18 12:09:52 crc kubenswrapper[4717]: I0218 12:09:52.997028 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"01cb18f3-386e-419b-ac3f-42603ef1dac5","Type":"ContainerStarted","Data":"90307c79a957683da019f9693da102aa4ec7f132d7f7fa09b9339db2f80b88c0"} Feb 18 12:09:53 crc kubenswrapper[4717]: I0218 12:09:53.038633 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.038605198 podStartE2EDuration="3.038605198s" podCreationTimestamp="2026-02-18 12:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:09:53.026376185 +0000 UTC m=+1227.428477511" watchObservedRunningTime="2026-02-18 12:09:53.038605198 +0000 UTC m=+1227.440706514" Feb 18 12:09:53 crc kubenswrapper[4717]: I0218 12:09:53.053506 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.053479698 podStartE2EDuration="3.053479698s" podCreationTimestamp="2026-02-18 12:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:09:53.05286611 +0000 UTC m=+1227.454967426" watchObservedRunningTime="2026-02-18 12:09:53.053479698 +0000 UTC m=+1227.455581014" Feb 18 12:09:53 crc kubenswrapper[4717]: I0218 12:09:53.059218 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20542147-8bf6-4a99-a07c-0d0ff9ee2c8e" path="/var/lib/kubelet/pods/20542147-8bf6-4a99-a07c-0d0ff9ee2c8e/volumes" Feb 18 12:09:54 crc kubenswrapper[4717]: I0218 12:09:54.508821 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 12:09:54 crc 
kubenswrapper[4717]: I0218 12:09:54.509519 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="4aa4314c-f7fd-4bad-909b-40cd26c1a377" containerName="kube-state-metrics" containerID="cri-o://8e848b6e53f0edfc455ea0d32f2bc5665ee1f6ba97202a4edfd327911137410a" gracePeriod=30 Feb 18 12:09:55 crc kubenswrapper[4717]: I0218 12:09:55.018047 4717 generic.go:334] "Generic (PLEG): container finished" podID="4aa4314c-f7fd-4bad-909b-40cd26c1a377" containerID="8e848b6e53f0edfc455ea0d32f2bc5665ee1f6ba97202a4edfd327911137410a" exitCode=2 Feb 18 12:09:55 crc kubenswrapper[4717]: I0218 12:09:55.018543 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4aa4314c-f7fd-4bad-909b-40cd26c1a377","Type":"ContainerDied","Data":"8e848b6e53f0edfc455ea0d32f2bc5665ee1f6ba97202a4edfd327911137410a"} Feb 18 12:09:55 crc kubenswrapper[4717]: I0218 12:09:55.018584 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4aa4314c-f7fd-4bad-909b-40cd26c1a377","Type":"ContainerDied","Data":"c70cf53d59dfe14d2a71c6ec916c0864986647bcdd474ef38f9fbcd582cf6eb4"} Feb 18 12:09:55 crc kubenswrapper[4717]: I0218 12:09:55.018598 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c70cf53d59dfe14d2a71c6ec916c0864986647bcdd474ef38f9fbcd582cf6eb4" Feb 18 12:09:55 crc kubenswrapper[4717]: I0218 12:09:55.103620 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 12:09:55 crc kubenswrapper[4717]: I0218 12:09:55.250577 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdj56\" (UniqueName: \"kubernetes.io/projected/4aa4314c-f7fd-4bad-909b-40cd26c1a377-kube-api-access-tdj56\") pod \"4aa4314c-f7fd-4bad-909b-40cd26c1a377\" (UID: \"4aa4314c-f7fd-4bad-909b-40cd26c1a377\") " Feb 18 12:09:55 crc kubenswrapper[4717]: I0218 12:09:55.267558 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aa4314c-f7fd-4bad-909b-40cd26c1a377-kube-api-access-tdj56" (OuterVolumeSpecName: "kube-api-access-tdj56") pod "4aa4314c-f7fd-4bad-909b-40cd26c1a377" (UID: "4aa4314c-f7fd-4bad-909b-40cd26c1a377"). InnerVolumeSpecName "kube-api-access-tdj56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:09:55 crc kubenswrapper[4717]: I0218 12:09:55.354297 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdj56\" (UniqueName: \"kubernetes.io/projected/4aa4314c-f7fd-4bad-909b-40cd26c1a377-kube-api-access-tdj56\") on node \"crc\" DevicePath \"\"" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.060173 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.105560 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.127752 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.137431 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 12:09:56 crc kubenswrapper[4717]: E0218 12:09:56.138094 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aa4314c-f7fd-4bad-909b-40cd26c1a377" containerName="kube-state-metrics" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.138119 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa4314c-f7fd-4bad-909b-40cd26c1a377" containerName="kube-state-metrics" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.138364 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aa4314c-f7fd-4bad-909b-40cd26c1a377" containerName="kube-state-metrics" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.139437 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.145508 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.145724 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.165826 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.268983 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.277318 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnbsv\" (UniqueName: \"kubernetes.io/projected/71b7862b-19b8-4921-955c-4948b428f4eb-kube-api-access-lnbsv\") pod \"kube-state-metrics-0\" (UID: \"71b7862b-19b8-4921-955c-4948b428f4eb\") " pod="openstack/kube-state-metrics-0" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.277412 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b7862b-19b8-4921-955c-4948b428f4eb-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"71b7862b-19b8-4921-955c-4948b428f4eb\") " pod="openstack/kube-state-metrics-0" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.277474 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b7862b-19b8-4921-955c-4948b428f4eb-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"71b7862b-19b8-4921-955c-4948b428f4eb\") " pod="openstack/kube-state-metrics-0" Feb 18 12:09:56 crc 
kubenswrapper[4717]: I0218 12:09:56.277512 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/71b7862b-19b8-4921-955c-4948b428f4eb-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"71b7862b-19b8-4921-955c-4948b428f4eb\") " pod="openstack/kube-state-metrics-0" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.379285 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnbsv\" (UniqueName: \"kubernetes.io/projected/71b7862b-19b8-4921-955c-4948b428f4eb-kube-api-access-lnbsv\") pod \"kube-state-metrics-0\" (UID: \"71b7862b-19b8-4921-955c-4948b428f4eb\") " pod="openstack/kube-state-metrics-0" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.379341 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b7862b-19b8-4921-955c-4948b428f4eb-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"71b7862b-19b8-4921-955c-4948b428f4eb\") " pod="openstack/kube-state-metrics-0" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.379404 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b7862b-19b8-4921-955c-4948b428f4eb-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"71b7862b-19b8-4921-955c-4948b428f4eb\") " pod="openstack/kube-state-metrics-0" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.379446 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/71b7862b-19b8-4921-955c-4948b428f4eb-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"71b7862b-19b8-4921-955c-4948b428f4eb\") " pod="openstack/kube-state-metrics-0" Feb 18 12:09:56 crc 
kubenswrapper[4717]: I0218 12:09:56.386071 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/71b7862b-19b8-4921-955c-4948b428f4eb-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"71b7862b-19b8-4921-955c-4948b428f4eb\") " pod="openstack/kube-state-metrics-0" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.389545 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.391311 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b7862b-19b8-4921-955c-4948b428f4eb-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"71b7862b-19b8-4921-955c-4948b428f4eb\") " pod="openstack/kube-state-metrics-0" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.405342 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b7862b-19b8-4921-955c-4948b428f4eb-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"71b7862b-19b8-4921-955c-4948b428f4eb\") " pod="openstack/kube-state-metrics-0" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.417879 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnbsv\" (UniqueName: \"kubernetes.io/projected/71b7862b-19b8-4921-955c-4948b428f4eb-kube-api-access-lnbsv\") pod \"kube-state-metrics-0\" (UID: \"71b7862b-19b8-4921-955c-4948b428f4eb\") " pod="openstack/kube-state-metrics-0" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.419866 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.421723 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="b15688e6-5868-4aea-94fe-377241de4120" containerName="ceilometer-central-agent" containerID="cri-o://615419bb075b7895d127e13b128bdd34b3e0e9912fd72550547dfb5ebdeb62ce" gracePeriod=30 Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.421824 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b15688e6-5868-4aea-94fe-377241de4120" containerName="proxy-httpd" containerID="cri-o://0da26040a548ecc2990eed828e71e9a9e0d7cb330471081b0cb2f5312e197b27" gracePeriod=30 Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.421976 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b15688e6-5868-4aea-94fe-377241de4120" containerName="sg-core" containerID="cri-o://f3dc751c957bf94e042d04733d25d5fb55dcaf4216754dddb60589f019a5ccf3" gracePeriod=30 Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.422044 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b15688e6-5868-4aea-94fe-377241de4120" containerName="ceilometer-notification-agent" containerID="cri-o://d9dd14757e2aa7f7ab2a620968e85e352216b3a8c858ee1cda2dcf59943b6a29" gracePeriod=30 Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.465368 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.987717 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 12:09:56 crc kubenswrapper[4717]: W0218 12:09:56.993035 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71b7862b_19b8_4921_955c_4948b428f4eb.slice/crio-b02bdc4cb90978e9d5a6c3ed356d99ec0746e4a6ec4f9b6f6e0ed3a52ba1773d WatchSource:0}: Error finding container b02bdc4cb90978e9d5a6c3ed356d99ec0746e4a6ec4f9b6f6e0ed3a52ba1773d: Status 404 returned error can't find the container with id b02bdc4cb90978e9d5a6c3ed356d99ec0746e4a6ec4f9b6f6e0ed3a52ba1773d Feb 18 12:09:56 crc kubenswrapper[4717]: I0218 12:09:56.997284 4717 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:09:57 crc kubenswrapper[4717]: I0218 12:09:57.048876 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aa4314c-f7fd-4bad-909b-40cd26c1a377" path="/var/lib/kubelet/pods/4aa4314c-f7fd-4bad-909b-40cd26c1a377/volumes" Feb 18 12:09:57 crc kubenswrapper[4717]: I0218 12:09:57.073793 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"71b7862b-19b8-4921-955c-4948b428f4eb","Type":"ContainerStarted","Data":"b02bdc4cb90978e9d5a6c3ed356d99ec0746e4a6ec4f9b6f6e0ed3a52ba1773d"} Feb 18 12:09:57 crc kubenswrapper[4717]: I0218 12:09:57.077847 4717 generic.go:334] "Generic (PLEG): container finished" podID="b15688e6-5868-4aea-94fe-377241de4120" containerID="0da26040a548ecc2990eed828e71e9a9e0d7cb330471081b0cb2f5312e197b27" exitCode=0 Feb 18 12:09:57 crc kubenswrapper[4717]: I0218 12:09:57.077899 4717 generic.go:334] "Generic (PLEG): container finished" podID="b15688e6-5868-4aea-94fe-377241de4120" containerID="f3dc751c957bf94e042d04733d25d5fb55dcaf4216754dddb60589f019a5ccf3" exitCode=2 Feb 18 
12:09:57 crc kubenswrapper[4717]: I0218 12:09:57.077912 4717 generic.go:334] "Generic (PLEG): container finished" podID="b15688e6-5868-4aea-94fe-377241de4120" containerID="615419bb075b7895d127e13b128bdd34b3e0e9912fd72550547dfb5ebdeb62ce" exitCode=0 Feb 18 12:09:57 crc kubenswrapper[4717]: I0218 12:09:57.077928 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15688e6-5868-4aea-94fe-377241de4120","Type":"ContainerDied","Data":"0da26040a548ecc2990eed828e71e9a9e0d7cb330471081b0cb2f5312e197b27"} Feb 18 12:09:57 crc kubenswrapper[4717]: I0218 12:09:57.077990 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15688e6-5868-4aea-94fe-377241de4120","Type":"ContainerDied","Data":"f3dc751c957bf94e042d04733d25d5fb55dcaf4216754dddb60589f019a5ccf3"} Feb 18 12:09:57 crc kubenswrapper[4717]: I0218 12:09:57.078003 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15688e6-5868-4aea-94fe-377241de4120","Type":"ContainerDied","Data":"615419bb075b7895d127e13b128bdd34b3e0e9912fd72550547dfb5ebdeb62ce"} Feb 18 12:09:57 crc kubenswrapper[4717]: I0218 12:09:57.286834 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 12:09:57 crc kubenswrapper[4717]: I0218 12:09:57.286991 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 12:09:58 crc kubenswrapper[4717]: I0218 12:09:58.299733 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="92d7bf71-c090-4908-a5b3-0e5b1551b137" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 12:09:58 crc kubenswrapper[4717]: I0218 12:09:58.299733 4717 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-metadata-0" podUID="92d7bf71-c090-4908-a5b3-0e5b1551b137" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 12:09:59 crc kubenswrapper[4717]: I0218 12:09:59.105449 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"71b7862b-19b8-4921-955c-4948b428f4eb","Type":"ContainerStarted","Data":"b1d1296110b723e3a3cf70ec2b25a7830a90f25b92a5e311d356a5966d6c617c"} Feb 18 12:09:59 crc kubenswrapper[4717]: I0218 12:09:59.106161 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 12:09:59 crc kubenswrapper[4717]: I0218 12:09:59.135193 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.2906511849999998 podStartE2EDuration="3.135170774s" podCreationTimestamp="2026-02-18 12:09:56 +0000 UTC" firstStartedPulling="2026-02-18 12:09:56.997044526 +0000 UTC m=+1231.399145842" lastFinishedPulling="2026-02-18 12:09:57.841564115 +0000 UTC m=+1232.243665431" observedRunningTime="2026-02-18 12:09:59.123524878 +0000 UTC m=+1233.525626194" watchObservedRunningTime="2026-02-18 12:09:59.135170774 +0000 UTC m=+1233.537272090" Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.122426 4717 generic.go:334] "Generic (PLEG): container finished" podID="b15688e6-5868-4aea-94fe-377241de4120" containerID="d9dd14757e2aa7f7ab2a620968e85e352216b3a8c858ee1cda2dcf59943b6a29" exitCode=0 Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.125588 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15688e6-5868-4aea-94fe-377241de4120","Type":"ContainerDied","Data":"d9dd14757e2aa7f7ab2a620968e85e352216b3a8c858ee1cda2dcf59943b6a29"} Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.223289 4717 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.290383 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-sg-core-conf-yaml\") pod \"b15688e6-5868-4aea-94fe-377241de4120\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.290456 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15688e6-5868-4aea-94fe-377241de4120-run-httpd\") pod \"b15688e6-5868-4aea-94fe-377241de4120\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.290586 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-combined-ca-bundle\") pod \"b15688e6-5868-4aea-94fe-377241de4120\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.290688 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h9f8\" (UniqueName: \"kubernetes.io/projected/b15688e6-5868-4aea-94fe-377241de4120-kube-api-access-5h9f8\") pod \"b15688e6-5868-4aea-94fe-377241de4120\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.290722 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-scripts\") pod \"b15688e6-5868-4aea-94fe-377241de4120\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.290740 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15688e6-5868-4aea-94fe-377241de4120-log-httpd\") pod \"b15688e6-5868-4aea-94fe-377241de4120\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.290851 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-config-data\") pod \"b15688e6-5868-4aea-94fe-377241de4120\" (UID: \"b15688e6-5868-4aea-94fe-377241de4120\") " Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.291058 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b15688e6-5868-4aea-94fe-377241de4120-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b15688e6-5868-4aea-94fe-377241de4120" (UID: "b15688e6-5868-4aea-94fe-377241de4120"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.291333 4717 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15688e6-5868-4aea-94fe-377241de4120-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.292893 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b15688e6-5868-4aea-94fe-377241de4120-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b15688e6-5868-4aea-94fe-377241de4120" (UID: "b15688e6-5868-4aea-94fe-377241de4120"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.309602 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b15688e6-5868-4aea-94fe-377241de4120-kube-api-access-5h9f8" (OuterVolumeSpecName: "kube-api-access-5h9f8") pod "b15688e6-5868-4aea-94fe-377241de4120" (UID: "b15688e6-5868-4aea-94fe-377241de4120"). InnerVolumeSpecName "kube-api-access-5h9f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.317523 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-scripts" (OuterVolumeSpecName: "scripts") pod "b15688e6-5868-4aea-94fe-377241de4120" (UID: "b15688e6-5868-4aea-94fe-377241de4120"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.333446 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b15688e6-5868-4aea-94fe-377241de4120" (UID: "b15688e6-5868-4aea-94fe-377241de4120"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.392862 4717 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.393304 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h9f8\" (UniqueName: \"kubernetes.io/projected/b15688e6-5868-4aea-94fe-377241de4120-kube-api-access-5h9f8\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.393414 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.393534 4717 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15688e6-5868-4aea-94fe-377241de4120-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.402482 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b15688e6-5868-4aea-94fe-377241de4120" (UID: "b15688e6-5868-4aea-94fe-377241de4120"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.427283 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-config-data" (OuterVolumeSpecName: "config-data") pod "b15688e6-5868-4aea-94fe-377241de4120" (UID: "b15688e6-5868-4aea-94fe-377241de4120"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.496658 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:00 crc kubenswrapper[4717]: I0218 12:10:00.496717 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15688e6-5868-4aea-94fe-377241de4120-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.136345 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15688e6-5868-4aea-94fe-377241de4120","Type":"ContainerDied","Data":"3e993bd71685ae17ce8a259f3bc9e7f7413e24a85b5996ff39556495834a2f86"} Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.136760 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.136775 4717 scope.go:117] "RemoveContainer" containerID="0da26040a548ecc2990eed828e71e9a9e0d7cb330471081b0cb2f5312e197b27" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.165383 4717 scope.go:117] "RemoveContainer" containerID="f3dc751c957bf94e042d04733d25d5fb55dcaf4216754dddb60589f019a5ccf3" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.168471 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.178576 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.199692 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.199843 4717 scope.go:117] "RemoveContainer" 
containerID="d9dd14757e2aa7f7ab2a620968e85e352216b3a8c858ee1cda2dcf59943b6a29" Feb 18 12:10:01 crc kubenswrapper[4717]: E0218 12:10:01.200204 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15688e6-5868-4aea-94fe-377241de4120" containerName="ceilometer-central-agent" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.200221 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15688e6-5868-4aea-94fe-377241de4120" containerName="ceilometer-central-agent" Feb 18 12:10:01 crc kubenswrapper[4717]: E0218 12:10:01.200237 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15688e6-5868-4aea-94fe-377241de4120" containerName="proxy-httpd" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.200244 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15688e6-5868-4aea-94fe-377241de4120" containerName="proxy-httpd" Feb 18 12:10:01 crc kubenswrapper[4717]: E0218 12:10:01.200271 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15688e6-5868-4aea-94fe-377241de4120" containerName="ceilometer-notification-agent" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.200277 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15688e6-5868-4aea-94fe-377241de4120" containerName="ceilometer-notification-agent" Feb 18 12:10:01 crc kubenswrapper[4717]: E0218 12:10:01.200287 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15688e6-5868-4aea-94fe-377241de4120" containerName="sg-core" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.200293 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15688e6-5868-4aea-94fe-377241de4120" containerName="sg-core" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.200476 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15688e6-5868-4aea-94fe-377241de4120" containerName="ceilometer-central-agent" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.200490 4717 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b15688e6-5868-4aea-94fe-377241de4120" containerName="proxy-httpd" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.200502 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15688e6-5868-4aea-94fe-377241de4120" containerName="ceilometer-notification-agent" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.200513 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15688e6-5868-4aea-94fe-377241de4120" containerName="sg-core" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.202443 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.205306 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.206689 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.206868 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.211095 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ded17957-de76-4763-9646-b78cf28cad08-run-httpd\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.211133 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw595\" (UniqueName: \"kubernetes.io/projected/ded17957-de76-4763-9646-b78cf28cad08-kube-api-access-tw595\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 
12:10:01.211169 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.211197 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.211248 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.211284 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-config-data\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.211305 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ded17957-de76-4763-9646-b78cf28cad08-log-httpd\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.211348 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-scripts\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.228699 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.239163 4717 scope.go:117] "RemoveContainer" containerID="615419bb075b7895d127e13b128bdd34b3e0e9912fd72550547dfb5ebdeb62ce" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.312923 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-scripts\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.313123 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ded17957-de76-4763-9646-b78cf28cad08-run-httpd\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.313162 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw595\" (UniqueName: \"kubernetes.io/projected/ded17957-de76-4763-9646-b78cf28cad08-kube-api-access-tw595\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.313244 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 
12:10:01.313292 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.313341 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.313362 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-config-data\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.313393 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ded17957-de76-4763-9646-b78cf28cad08-log-httpd\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.314775 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ded17957-de76-4763-9646-b78cf28cad08-log-httpd\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.314833 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ded17957-de76-4763-9646-b78cf28cad08-run-httpd\") pod \"ceilometer-0\" (UID: 
\"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.320464 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-scripts\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.320736 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.321857 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.322798 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.322854 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-config-data\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.335213 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw595\" (UniqueName: 
\"kubernetes.io/projected/ded17957-de76-4763-9646-b78cf28cad08-kube-api-access-tw595\") pod \"ceilometer-0\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " pod="openstack/ceilometer-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.389454 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.409956 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.409999 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.430813 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 12:10:01 crc kubenswrapper[4717]: I0218 12:10:01.526101 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:10:02 crc kubenswrapper[4717]: I0218 12:10:02.014788 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:10:02 crc kubenswrapper[4717]: I0218 12:10:02.147160 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ded17957-de76-4763-9646-b78cf28cad08","Type":"ContainerStarted","Data":"040a7dc2350fed58a174fd40e683d199292b4ef0d5a183d7bb1e7694b7879bae"} Feb 18 12:10:02 crc kubenswrapper[4717]: I0218 12:10:02.184653 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 12:10:02 crc kubenswrapper[4717]: I0218 12:10:02.495518 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a005c821-9a23-46ed-9b7a-2cc11e7db64d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Feb 18 12:10:02 crc kubenswrapper[4717]: I0218 12:10:02.495525 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a005c821-9a23-46ed-9b7a-2cc11e7db64d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 12:10:03 crc kubenswrapper[4717]: I0218 12:10:03.048531 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b15688e6-5868-4aea-94fe-377241de4120" path="/var/lib/kubelet/pods/b15688e6-5868-4aea-94fe-377241de4120/volumes" Feb 18 12:10:03 crc kubenswrapper[4717]: I0218 12:10:03.160771 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ded17957-de76-4763-9646-b78cf28cad08","Type":"ContainerStarted","Data":"099e59a85d911afdce734c75f42216345ef237e7dfb9c955d2f6af462f167957"} Feb 18 12:10:04 crc kubenswrapper[4717]: I0218 12:10:04.174827 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ded17957-de76-4763-9646-b78cf28cad08","Type":"ContainerStarted","Data":"7fbd1492b669fe1608414fcce86028379fd49e83bcd39bd9328c808888bec07b"} Feb 18 12:10:05 crc kubenswrapper[4717]: I0218 12:10:05.188416 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ded17957-de76-4763-9646-b78cf28cad08","Type":"ContainerStarted","Data":"e5c020f94530000ec595baba36413d337e47a9521803d29ba48103c143aa300c"} Feb 18 12:10:06 crc kubenswrapper[4717]: I0218 12:10:06.478074 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 12:10:07 crc kubenswrapper[4717]: I0218 12:10:07.295874 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 12:10:07 crc kubenswrapper[4717]: I0218 12:10:07.296582 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Feb 18 12:10:07 crc kubenswrapper[4717]: I0218 12:10:07.304678 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 12:10:08 crc kubenswrapper[4717]: I0218 12:10:08.237011 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ded17957-de76-4763-9646-b78cf28cad08","Type":"ContainerStarted","Data":"c9c3e280d15e9815add2b5fdc97c2d41bb27e5bd4567f04864436ad1f2245921"} Feb 18 12:10:08 crc kubenswrapper[4717]: I0218 12:10:08.242093 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 12:10:08 crc kubenswrapper[4717]: I0218 12:10:08.301637 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.068041563 podStartE2EDuration="7.301607206s" podCreationTimestamp="2026-02-18 12:10:01 +0000 UTC" firstStartedPulling="2026-02-18 12:10:02.023294952 +0000 UTC m=+1236.425396268" lastFinishedPulling="2026-02-18 12:10:07.256860595 +0000 UTC m=+1241.658961911" observedRunningTime="2026-02-18 12:10:08.270749434 +0000 UTC m=+1242.672850750" watchObservedRunningTime="2026-02-18 12:10:08.301607206 +0000 UTC m=+1242.703708522" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.227951 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.256099 4717 generic.go:334] "Generic (PLEG): container finished" podID="cce21079-f0b5-427e-a5f8-ba58efbfed27" containerID="c62c627c132e1c7f7674e36c455f2f4601c1285b7f923f0ca93fc7ea613795ad" exitCode=137 Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.256899 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.257395 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cce21079-f0b5-427e-a5f8-ba58efbfed27","Type":"ContainerDied","Data":"c62c627c132e1c7f7674e36c455f2f4601c1285b7f923f0ca93fc7ea613795ad"} Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.257433 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.257447 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cce21079-f0b5-427e-a5f8-ba58efbfed27","Type":"ContainerDied","Data":"35a282134db5a2affc2c8a01431cd2767e67fc62f6284538215b1a005048519f"} Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.257956 4717 scope.go:117] "RemoveContainer" containerID="c62c627c132e1c7f7674e36c455f2f4601c1285b7f923f0ca93fc7ea613795ad" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.294186 4717 scope.go:117] "RemoveContainer" containerID="c62c627c132e1c7f7674e36c455f2f4601c1285b7f923f0ca93fc7ea613795ad" Feb 18 12:10:09 crc kubenswrapper[4717]: E0218 12:10:09.295307 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c62c627c132e1c7f7674e36c455f2f4601c1285b7f923f0ca93fc7ea613795ad\": container with ID starting with c62c627c132e1c7f7674e36c455f2f4601c1285b7f923f0ca93fc7ea613795ad not found: ID does not exist" containerID="c62c627c132e1c7f7674e36c455f2f4601c1285b7f923f0ca93fc7ea613795ad" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.295378 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c62c627c132e1c7f7674e36c455f2f4601c1285b7f923f0ca93fc7ea613795ad"} err="failed to get container status \"c62c627c132e1c7f7674e36c455f2f4601c1285b7f923f0ca93fc7ea613795ad\": 
rpc error: code = NotFound desc = could not find container \"c62c627c132e1c7f7674e36c455f2f4601c1285b7f923f0ca93fc7ea613795ad\": container with ID starting with c62c627c132e1c7f7674e36c455f2f4601c1285b7f923f0ca93fc7ea613795ad not found: ID does not exist" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.334074 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce21079-f0b5-427e-a5f8-ba58efbfed27-combined-ca-bundle\") pod \"cce21079-f0b5-427e-a5f8-ba58efbfed27\" (UID: \"cce21079-f0b5-427e-a5f8-ba58efbfed27\") " Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.334247 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce21079-f0b5-427e-a5f8-ba58efbfed27-config-data\") pod \"cce21079-f0b5-427e-a5f8-ba58efbfed27\" (UID: \"cce21079-f0b5-427e-a5f8-ba58efbfed27\") " Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.334549 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbk84\" (UniqueName: \"kubernetes.io/projected/cce21079-f0b5-427e-a5f8-ba58efbfed27-kube-api-access-dbk84\") pod \"cce21079-f0b5-427e-a5f8-ba58efbfed27\" (UID: \"cce21079-f0b5-427e-a5f8-ba58efbfed27\") " Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.341314 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce21079-f0b5-427e-a5f8-ba58efbfed27-kube-api-access-dbk84" (OuterVolumeSpecName: "kube-api-access-dbk84") pod "cce21079-f0b5-427e-a5f8-ba58efbfed27" (UID: "cce21079-f0b5-427e-a5f8-ba58efbfed27"). InnerVolumeSpecName "kube-api-access-dbk84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.370503 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce21079-f0b5-427e-a5f8-ba58efbfed27-config-data" (OuterVolumeSpecName: "config-data") pod "cce21079-f0b5-427e-a5f8-ba58efbfed27" (UID: "cce21079-f0b5-427e-a5f8-ba58efbfed27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.370638 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce21079-f0b5-427e-a5f8-ba58efbfed27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cce21079-f0b5-427e-a5f8-ba58efbfed27" (UID: "cce21079-f0b5-427e-a5f8-ba58efbfed27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.437612 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce21079-f0b5-427e-a5f8-ba58efbfed27-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.437647 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbk84\" (UniqueName: \"kubernetes.io/projected/cce21079-f0b5-427e-a5f8-ba58efbfed27-kube-api-access-dbk84\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.437660 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce21079-f0b5-427e-a5f8-ba58efbfed27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.607129 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.625889 4717 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.644220 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 12:10:09 crc kubenswrapper[4717]: E0218 12:10:09.644902 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce21079-f0b5-427e-a5f8-ba58efbfed27" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.644933 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce21079-f0b5-427e-a5f8-ba58efbfed27" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.645241 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="cce21079-f0b5-427e-a5f8-ba58efbfed27" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.646649 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.649476 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.649943 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.655249 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.655842 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.744246 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c95e107a-542e-4a31-98f9-aed639f1fc42-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c95e107a-542e-4a31-98f9-aed639f1fc42\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.744340 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95e107a-542e-4a31-98f9-aed639f1fc42-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c95e107a-542e-4a31-98f9-aed639f1fc42\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.744416 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95e107a-542e-4a31-98f9-aed639f1fc42-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c95e107a-542e-4a31-98f9-aed639f1fc42\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.744496 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95e107a-542e-4a31-98f9-aed639f1fc42-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c95e107a-542e-4a31-98f9-aed639f1fc42\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.744564 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvgs9\" (UniqueName: \"kubernetes.io/projected/c95e107a-542e-4a31-98f9-aed639f1fc42-kube-api-access-fvgs9\") pod \"nova-cell1-novncproxy-0\" (UID: \"c95e107a-542e-4a31-98f9-aed639f1fc42\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.847143 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c95e107a-542e-4a31-98f9-aed639f1fc42-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c95e107a-542e-4a31-98f9-aed639f1fc42\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.847781 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95e107a-542e-4a31-98f9-aed639f1fc42-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c95e107a-542e-4a31-98f9-aed639f1fc42\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.847818 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95e107a-542e-4a31-98f9-aed639f1fc42-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c95e107a-542e-4a31-98f9-aed639f1fc42\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.847950 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95e107a-542e-4a31-98f9-aed639f1fc42-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c95e107a-542e-4a31-98f9-aed639f1fc42\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.848082 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvgs9\" (UniqueName: \"kubernetes.io/projected/c95e107a-542e-4a31-98f9-aed639f1fc42-kube-api-access-fvgs9\") pod \"nova-cell1-novncproxy-0\" (UID: \"c95e107a-542e-4a31-98f9-aed639f1fc42\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.852945 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c95e107a-542e-4a31-98f9-aed639f1fc42-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c95e107a-542e-4a31-98f9-aed639f1fc42\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.853033 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95e107a-542e-4a31-98f9-aed639f1fc42-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c95e107a-542e-4a31-98f9-aed639f1fc42\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.853717 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95e107a-542e-4a31-98f9-aed639f1fc42-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c95e107a-542e-4a31-98f9-aed639f1fc42\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.854929 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95e107a-542e-4a31-98f9-aed639f1fc42-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c95e107a-542e-4a31-98f9-aed639f1fc42\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.882270 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvgs9\" (UniqueName: \"kubernetes.io/projected/c95e107a-542e-4a31-98f9-aed639f1fc42-kube-api-access-fvgs9\") pod \"nova-cell1-novncproxy-0\" (UID: \"c95e107a-542e-4a31-98f9-aed639f1fc42\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:09 crc kubenswrapper[4717]: I0218 12:10:09.976654 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:10 crc kubenswrapper[4717]: I0218 12:10:10.500578 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.058056 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cce21079-f0b5-427e-a5f8-ba58efbfed27" path="/var/lib/kubelet/pods/cce21079-f0b5-427e-a5f8-ba58efbfed27/volumes" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.287308 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c95e107a-542e-4a31-98f9-aed639f1fc42","Type":"ContainerStarted","Data":"084b682fa7141b84d9f04e794ebd2ff8ffc95cd5aa87217ea34603b6c5c613d5"} Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.287375 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c95e107a-542e-4a31-98f9-aed639f1fc42","Type":"ContainerStarted","Data":"c525219fb83da4f44ddbdb94f8b2adcec19fba9eeb7cbfb9513ce146752905e2"} Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.310836 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.3108117200000002 podStartE2EDuration="2.31081172s" podCreationTimestamp="2026-02-18 12:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:10:11.30769768 +0000 UTC m=+1245.709799016" watchObservedRunningTime="2026-02-18 12:10:11.31081172 +0000 UTC m=+1245.712913036" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.419665 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.419766 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 
12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.420921 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.420985 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.425496 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.427028 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.647815 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-fbdml"] Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.651239 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.672410 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-fbdml"] Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.795918 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.796026 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-config\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 
12:10:11.796060 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khkjl\" (UniqueName: \"kubernetes.io/projected/a1e157b4-5706-45aa-b5bf-9c5bd109b501-kube-api-access-khkjl\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.796090 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.796113 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.796138 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.898167 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-config\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: 
I0218 12:10:11.898714 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khkjl\" (UniqueName: \"kubernetes.io/projected/a1e157b4-5706-45aa-b5bf-9c5bd109b501-kube-api-access-khkjl\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.898761 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.898784 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.899447 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.901346 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.901452 4717 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-config\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.901492 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.901770 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.905423 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.919885 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.925922 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khkjl\" (UniqueName: 
\"kubernetes.io/projected/a1e157b4-5706-45aa-b5bf-9c5bd109b501-kube-api-access-khkjl\") pod \"dnsmasq-dns-89c5cd4d5-fbdml\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:11 crc kubenswrapper[4717]: I0218 12:10:11.984881 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:12 crc kubenswrapper[4717]: I0218 12:10:12.491134 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-fbdml"] Feb 18 12:10:12 crc kubenswrapper[4717]: W0218 12:10:12.501015 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1e157b4_5706_45aa_b5bf_9c5bd109b501.slice/crio-50406ac86752084b324fc33d64e1893255ae06ed5e1c3873539ffedb54b3b8ce WatchSource:0}: Error finding container 50406ac86752084b324fc33d64e1893255ae06ed5e1c3873539ffedb54b3b8ce: Status 404 returned error can't find the container with id 50406ac86752084b324fc33d64e1893255ae06ed5e1c3873539ffedb54b3b8ce Feb 18 12:10:12 crc kubenswrapper[4717]: I0218 12:10:12.773787 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:10:12 crc kubenswrapper[4717]: I0218 12:10:12.774306 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:10:13 crc kubenswrapper[4717]: I0218 12:10:13.309551 4717 generic.go:334] "Generic (PLEG): container finished" 
podID="a1e157b4-5706-45aa-b5bf-9c5bd109b501" containerID="d79aeba9e6c587c26372dda795bcd3d46de2aebebcfe5685c6fd15612072a5cd" exitCode=0 Feb 18 12:10:13 crc kubenswrapper[4717]: I0218 12:10:13.310126 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" event={"ID":"a1e157b4-5706-45aa-b5bf-9c5bd109b501","Type":"ContainerDied","Data":"d79aeba9e6c587c26372dda795bcd3d46de2aebebcfe5685c6fd15612072a5cd"} Feb 18 12:10:13 crc kubenswrapper[4717]: I0218 12:10:13.310171 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" event={"ID":"a1e157b4-5706-45aa-b5bf-9c5bd109b501","Type":"ContainerStarted","Data":"50406ac86752084b324fc33d64e1893255ae06ed5e1c3873539ffedb54b3b8ce"} Feb 18 12:10:14 crc kubenswrapper[4717]: I0218 12:10:14.087321 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:10:14 crc kubenswrapper[4717]: I0218 12:10:14.322215 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" event={"ID":"a1e157b4-5706-45aa-b5bf-9c5bd109b501","Type":"ContainerStarted","Data":"7e4f0426d4fc9a404f8e3a17a0ed0132824fa56217543b65cfc846955501b330"} Feb 18 12:10:14 crc kubenswrapper[4717]: I0218 12:10:14.322374 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a005c821-9a23-46ed-9b7a-2cc11e7db64d" containerName="nova-api-log" containerID="cri-o://83e0129d59962bf80de5d9ef8427b674ec2f4f9dfd22d5aa4930580a75e61a0e" gracePeriod=30 Feb 18 12:10:14 crc kubenswrapper[4717]: I0218 12:10:14.322428 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a005c821-9a23-46ed-9b7a-2cc11e7db64d" containerName="nova-api-api" containerID="cri-o://73842bd2159ffad5e52fd32b1924af781e256e618b1c057d92c836d540fe8101" gracePeriod=30 Feb 18 12:10:14 crc kubenswrapper[4717]: I0218 12:10:14.355518 4717 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" podStartSLOduration=3.355485239 podStartE2EDuration="3.355485239s" podCreationTimestamp="2026-02-18 12:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:10:14.349713232 +0000 UTC m=+1248.751814548" watchObservedRunningTime="2026-02-18 12:10:14.355485239 +0000 UTC m=+1248.757586555" Feb 18 12:10:14 crc kubenswrapper[4717]: I0218 12:10:14.651248 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:10:14 crc kubenswrapper[4717]: I0218 12:10:14.652069 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ded17957-de76-4763-9646-b78cf28cad08" containerName="ceilometer-central-agent" containerID="cri-o://099e59a85d911afdce734c75f42216345ef237e7dfb9c955d2f6af462f167957" gracePeriod=30 Feb 18 12:10:14 crc kubenswrapper[4717]: I0218 12:10:14.652238 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ded17957-de76-4763-9646-b78cf28cad08" containerName="sg-core" containerID="cri-o://e5c020f94530000ec595baba36413d337e47a9521803d29ba48103c143aa300c" gracePeriod=30 Feb 18 12:10:14 crc kubenswrapper[4717]: I0218 12:10:14.652305 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ded17957-de76-4763-9646-b78cf28cad08" containerName="proxy-httpd" containerID="cri-o://c9c3e280d15e9815add2b5fdc97c2d41bb27e5bd4567f04864436ad1f2245921" gracePeriod=30 Feb 18 12:10:14 crc kubenswrapper[4717]: I0218 12:10:14.652501 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ded17957-de76-4763-9646-b78cf28cad08" containerName="ceilometer-notification-agent" 
containerID="cri-o://7fbd1492b669fe1608414fcce86028379fd49e83bcd39bd9328c808888bec07b" gracePeriod=30 Feb 18 12:10:14 crc kubenswrapper[4717]: I0218 12:10:14.977794 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:15 crc kubenswrapper[4717]: I0218 12:10:15.333459 4717 generic.go:334] "Generic (PLEG): container finished" podID="a005c821-9a23-46ed-9b7a-2cc11e7db64d" containerID="83e0129d59962bf80de5d9ef8427b674ec2f4f9dfd22d5aa4930580a75e61a0e" exitCode=143 Feb 18 12:10:15 crc kubenswrapper[4717]: I0218 12:10:15.333557 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a005c821-9a23-46ed-9b7a-2cc11e7db64d","Type":"ContainerDied","Data":"83e0129d59962bf80de5d9ef8427b674ec2f4f9dfd22d5aa4930580a75e61a0e"} Feb 18 12:10:15 crc kubenswrapper[4717]: I0218 12:10:15.336280 4717 generic.go:334] "Generic (PLEG): container finished" podID="ded17957-de76-4763-9646-b78cf28cad08" containerID="c9c3e280d15e9815add2b5fdc97c2d41bb27e5bd4567f04864436ad1f2245921" exitCode=0 Feb 18 12:10:15 crc kubenswrapper[4717]: I0218 12:10:15.336314 4717 generic.go:334] "Generic (PLEG): container finished" podID="ded17957-de76-4763-9646-b78cf28cad08" containerID="e5c020f94530000ec595baba36413d337e47a9521803d29ba48103c143aa300c" exitCode=2 Feb 18 12:10:15 crc kubenswrapper[4717]: I0218 12:10:15.336300 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ded17957-de76-4763-9646-b78cf28cad08","Type":"ContainerDied","Data":"c9c3e280d15e9815add2b5fdc97c2d41bb27e5bd4567f04864436ad1f2245921"} Feb 18 12:10:15 crc kubenswrapper[4717]: I0218 12:10:15.336362 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ded17957-de76-4763-9646-b78cf28cad08","Type":"ContainerDied","Data":"e5c020f94530000ec595baba36413d337e47a9521803d29ba48103c143aa300c"} Feb 18 12:10:15 crc kubenswrapper[4717]: I0218 12:10:15.336377 4717 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ded17957-de76-4763-9646-b78cf28cad08","Type":"ContainerDied","Data":"099e59a85d911afdce734c75f42216345ef237e7dfb9c955d2f6af462f167957"} Feb 18 12:10:15 crc kubenswrapper[4717]: I0218 12:10:15.336323 4717 generic.go:334] "Generic (PLEG): container finished" podID="ded17957-de76-4763-9646-b78cf28cad08" containerID="099e59a85d911afdce734c75f42216345ef237e7dfb9c955d2f6af462f167957" exitCode=0 Feb 18 12:10:15 crc kubenswrapper[4717]: I0218 12:10:15.336723 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.086089 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.232161 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-sg-core-conf-yaml\") pod \"ded17957-de76-4763-9646-b78cf28cad08\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.232445 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-config-data\") pod \"ded17957-de76-4763-9646-b78cf28cad08\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.232565 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-ceilometer-tls-certs\") pod \"ded17957-de76-4763-9646-b78cf28cad08\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.232664 4717 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-combined-ca-bundle\") pod \"ded17957-de76-4763-9646-b78cf28cad08\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.232697 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw595\" (UniqueName: \"kubernetes.io/projected/ded17957-de76-4763-9646-b78cf28cad08-kube-api-access-tw595\") pod \"ded17957-de76-4763-9646-b78cf28cad08\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.232752 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-scripts\") pod \"ded17957-de76-4763-9646-b78cf28cad08\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.232780 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ded17957-de76-4763-9646-b78cf28cad08-log-httpd\") pod \"ded17957-de76-4763-9646-b78cf28cad08\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.232802 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ded17957-de76-4763-9646-b78cf28cad08-run-httpd\") pod \"ded17957-de76-4763-9646-b78cf28cad08\" (UID: \"ded17957-de76-4763-9646-b78cf28cad08\") " Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.233804 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ded17957-de76-4763-9646-b78cf28cad08-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ded17957-de76-4763-9646-b78cf28cad08" (UID: "ded17957-de76-4763-9646-b78cf28cad08"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.233977 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ded17957-de76-4763-9646-b78cf28cad08-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ded17957-de76-4763-9646-b78cf28cad08" (UID: "ded17957-de76-4763-9646-b78cf28cad08"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.240359 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ded17957-de76-4763-9646-b78cf28cad08-kube-api-access-tw595" (OuterVolumeSpecName: "kube-api-access-tw595") pod "ded17957-de76-4763-9646-b78cf28cad08" (UID: "ded17957-de76-4763-9646-b78cf28cad08"). InnerVolumeSpecName "kube-api-access-tw595". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.241511 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-scripts" (OuterVolumeSpecName: "scripts") pod "ded17957-de76-4763-9646-b78cf28cad08" (UID: "ded17957-de76-4763-9646-b78cf28cad08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.268179 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ded17957-de76-4763-9646-b78cf28cad08" (UID: "ded17957-de76-4763-9646-b78cf28cad08"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.294085 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ded17957-de76-4763-9646-b78cf28cad08" (UID: "ded17957-de76-4763-9646-b78cf28cad08"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.325629 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ded17957-de76-4763-9646-b78cf28cad08" (UID: "ded17957-de76-4763-9646-b78cf28cad08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.335730 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.335769 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw595\" (UniqueName: \"kubernetes.io/projected/ded17957-de76-4763-9646-b78cf28cad08-kube-api-access-tw595\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.335784 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.335793 4717 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ded17957-de76-4763-9646-b78cf28cad08-run-httpd\") on node 
\"crc\" DevicePath \"\"" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.335804 4717 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ded17957-de76-4763-9646-b78cf28cad08-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.335813 4717 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.335821 4717 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.391925 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-config-data" (OuterVolumeSpecName: "config-data") pod "ded17957-de76-4763-9646-b78cf28cad08" (UID: "ded17957-de76-4763-9646-b78cf28cad08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.400368 4717 generic.go:334] "Generic (PLEG): container finished" podID="ded17957-de76-4763-9646-b78cf28cad08" containerID="7fbd1492b669fe1608414fcce86028379fd49e83bcd39bd9328c808888bec07b" exitCode=0 Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.400417 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ded17957-de76-4763-9646-b78cf28cad08","Type":"ContainerDied","Data":"7fbd1492b669fe1608414fcce86028379fd49e83bcd39bd9328c808888bec07b"} Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.400458 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.400486 4717 scope.go:117] "RemoveContainer" containerID="c9c3e280d15e9815add2b5fdc97c2d41bb27e5bd4567f04864436ad1f2245921" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.400470 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ded17957-de76-4763-9646-b78cf28cad08","Type":"ContainerDied","Data":"040a7dc2350fed58a174fd40e683d199292b4ef0d5a183d7bb1e7694b7879bae"} Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.422208 4717 scope.go:117] "RemoveContainer" containerID="e5c020f94530000ec595baba36413d337e47a9521803d29ba48103c143aa300c" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.438789 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded17957-de76-4763-9646-b78cf28cad08-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.444680 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.461129 4717 scope.go:117] "RemoveContainer" containerID="7fbd1492b669fe1608414fcce86028379fd49e83bcd39bd9328c808888bec07b" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.467632 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.482683 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:10:17 crc kubenswrapper[4717]: E0218 12:10:17.483545 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded17957-de76-4763-9646-b78cf28cad08" containerName="proxy-httpd" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.483572 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded17957-de76-4763-9646-b78cf28cad08" containerName="proxy-httpd" Feb 18 12:10:17 crc 
kubenswrapper[4717]: E0218 12:10:17.483587 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded17957-de76-4763-9646-b78cf28cad08" containerName="sg-core" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.483593 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded17957-de76-4763-9646-b78cf28cad08" containerName="sg-core" Feb 18 12:10:17 crc kubenswrapper[4717]: E0218 12:10:17.483622 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded17957-de76-4763-9646-b78cf28cad08" containerName="ceilometer-central-agent" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.483630 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded17957-de76-4763-9646-b78cf28cad08" containerName="ceilometer-central-agent" Feb 18 12:10:17 crc kubenswrapper[4717]: E0218 12:10:17.483653 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded17957-de76-4763-9646-b78cf28cad08" containerName="ceilometer-notification-agent" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.483661 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded17957-de76-4763-9646-b78cf28cad08" containerName="ceilometer-notification-agent" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.483886 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded17957-de76-4763-9646-b78cf28cad08" containerName="ceilometer-notification-agent" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.483901 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded17957-de76-4763-9646-b78cf28cad08" containerName="ceilometer-central-agent" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.483911 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded17957-de76-4763-9646-b78cf28cad08" containerName="proxy-httpd" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.483927 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded17957-de76-4763-9646-b78cf28cad08" 
containerName="sg-core" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.495655 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.495522 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.500105 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.500104 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.500389 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.502576 4717 scope.go:117] "RemoveContainer" containerID="099e59a85d911afdce734c75f42216345ef237e7dfb9c955d2f6af462f167957" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.582643 4717 scope.go:117] "RemoveContainer" containerID="c9c3e280d15e9815add2b5fdc97c2d41bb27e5bd4567f04864436ad1f2245921" Feb 18 12:10:17 crc kubenswrapper[4717]: E0218 12:10:17.583338 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9c3e280d15e9815add2b5fdc97c2d41bb27e5bd4567f04864436ad1f2245921\": container with ID starting with c9c3e280d15e9815add2b5fdc97c2d41bb27e5bd4567f04864436ad1f2245921 not found: ID does not exist" containerID="c9c3e280d15e9815add2b5fdc97c2d41bb27e5bd4567f04864436ad1f2245921" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.583375 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9c3e280d15e9815add2b5fdc97c2d41bb27e5bd4567f04864436ad1f2245921"} err="failed to get container status 
\"c9c3e280d15e9815add2b5fdc97c2d41bb27e5bd4567f04864436ad1f2245921\": rpc error: code = NotFound desc = could not find container \"c9c3e280d15e9815add2b5fdc97c2d41bb27e5bd4567f04864436ad1f2245921\": container with ID starting with c9c3e280d15e9815add2b5fdc97c2d41bb27e5bd4567f04864436ad1f2245921 not found: ID does not exist" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.583430 4717 scope.go:117] "RemoveContainer" containerID="e5c020f94530000ec595baba36413d337e47a9521803d29ba48103c143aa300c" Feb 18 12:10:17 crc kubenswrapper[4717]: E0218 12:10:17.583767 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5c020f94530000ec595baba36413d337e47a9521803d29ba48103c143aa300c\": container with ID starting with e5c020f94530000ec595baba36413d337e47a9521803d29ba48103c143aa300c not found: ID does not exist" containerID="e5c020f94530000ec595baba36413d337e47a9521803d29ba48103c143aa300c" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.583789 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5c020f94530000ec595baba36413d337e47a9521803d29ba48103c143aa300c"} err="failed to get container status \"e5c020f94530000ec595baba36413d337e47a9521803d29ba48103c143aa300c\": rpc error: code = NotFound desc = could not find container \"e5c020f94530000ec595baba36413d337e47a9521803d29ba48103c143aa300c\": container with ID starting with e5c020f94530000ec595baba36413d337e47a9521803d29ba48103c143aa300c not found: ID does not exist" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.583804 4717 scope.go:117] "RemoveContainer" containerID="7fbd1492b669fe1608414fcce86028379fd49e83bcd39bd9328c808888bec07b" Feb 18 12:10:17 crc kubenswrapper[4717]: E0218 12:10:17.584018 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7fbd1492b669fe1608414fcce86028379fd49e83bcd39bd9328c808888bec07b\": container with ID starting with 7fbd1492b669fe1608414fcce86028379fd49e83bcd39bd9328c808888bec07b not found: ID does not exist" containerID="7fbd1492b669fe1608414fcce86028379fd49e83bcd39bd9328c808888bec07b" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.584035 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fbd1492b669fe1608414fcce86028379fd49e83bcd39bd9328c808888bec07b"} err="failed to get container status \"7fbd1492b669fe1608414fcce86028379fd49e83bcd39bd9328c808888bec07b\": rpc error: code = NotFound desc = could not find container \"7fbd1492b669fe1608414fcce86028379fd49e83bcd39bd9328c808888bec07b\": container with ID starting with 7fbd1492b669fe1608414fcce86028379fd49e83bcd39bd9328c808888bec07b not found: ID does not exist" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.584048 4717 scope.go:117] "RemoveContainer" containerID="099e59a85d911afdce734c75f42216345ef237e7dfb9c955d2f6af462f167957" Feb 18 12:10:17 crc kubenswrapper[4717]: E0218 12:10:17.584247 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"099e59a85d911afdce734c75f42216345ef237e7dfb9c955d2f6af462f167957\": container with ID starting with 099e59a85d911afdce734c75f42216345ef237e7dfb9c955d2f6af462f167957 not found: ID does not exist" containerID="099e59a85d911afdce734c75f42216345ef237e7dfb9c955d2f6af462f167957" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.584303 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"099e59a85d911afdce734c75f42216345ef237e7dfb9c955d2f6af462f167957"} err="failed to get container status \"099e59a85d911afdce734c75f42216345ef237e7dfb9c955d2f6af462f167957\": rpc error: code = NotFound desc = could not find container \"099e59a85d911afdce734c75f42216345ef237e7dfb9c955d2f6af462f167957\": container with ID 
starting with 099e59a85d911afdce734c75f42216345ef237e7dfb9c955d2f6af462f167957 not found: ID does not exist" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.643461 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73da3578-c044-4462-9bdc-0985effde3bf-run-httpd\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.643532 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73da3578-c044-4462-9bdc-0985effde3bf-scripts\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.643574 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73da3578-c044-4462-9bdc-0985effde3bf-log-httpd\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.643690 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/73da3578-c044-4462-9bdc-0985effde3bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.643733 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73da3578-c044-4462-9bdc-0985effde3bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc 
kubenswrapper[4717]: I0218 12:10:17.643813 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqvbn\" (UniqueName: \"kubernetes.io/projected/73da3578-c044-4462-9bdc-0985effde3bf-kube-api-access-wqvbn\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.643856 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/73da3578-c044-4462-9bdc-0985effde3bf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.643882 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73da3578-c044-4462-9bdc-0985effde3bf-config-data\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.746205 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73da3578-c044-4462-9bdc-0985effde3bf-run-httpd\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.746786 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73da3578-c044-4462-9bdc-0985effde3bf-scripts\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.746816 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/73da3578-c044-4462-9bdc-0985effde3bf-log-httpd\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.746862 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/73da3578-c044-4462-9bdc-0985effde3bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.746934 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73da3578-c044-4462-9bdc-0985effde3bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.747010 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqvbn\" (UniqueName: \"kubernetes.io/projected/73da3578-c044-4462-9bdc-0985effde3bf-kube-api-access-wqvbn\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.747081 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/73da3578-c044-4462-9bdc-0985effde3bf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.747112 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73da3578-c044-4462-9bdc-0985effde3bf-config-data\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 
crc kubenswrapper[4717]: I0218 12:10:17.747906 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73da3578-c044-4462-9bdc-0985effde3bf-log-httpd\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.748710 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/73da3578-c044-4462-9bdc-0985effde3bf-run-httpd\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.754691 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73da3578-c044-4462-9bdc-0985effde3bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.755599 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73da3578-c044-4462-9bdc-0985effde3bf-scripts\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.756071 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73da3578-c044-4462-9bdc-0985effde3bf-config-data\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.756449 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/73da3578-c044-4462-9bdc-0985effde3bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.773082 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqvbn\" (UniqueName: \"kubernetes.io/projected/73da3578-c044-4462-9bdc-0985effde3bf-kube-api-access-wqvbn\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.775668 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/73da3578-c044-4462-9bdc-0985effde3bf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"73da3578-c044-4462-9bdc-0985effde3bf\") " pod="openstack/ceilometer-0" Feb 18 12:10:17 crc kubenswrapper[4717]: I0218 12:10:17.885596 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.011251 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.162322 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005c821-9a23-46ed-9b7a-2cc11e7db64d-config-data\") pod \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\" (UID: \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\") " Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.162520 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vd68\" (UniqueName: \"kubernetes.io/projected/a005c821-9a23-46ed-9b7a-2cc11e7db64d-kube-api-access-2vd68\") pod \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\" (UID: \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\") " Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.162680 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005c821-9a23-46ed-9b7a-2cc11e7db64d-combined-ca-bundle\") pod \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\" (UID: \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\") " Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.163084 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a005c821-9a23-46ed-9b7a-2cc11e7db64d-logs\") pod \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\" (UID: \"a005c821-9a23-46ed-9b7a-2cc11e7db64d\") " Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.163584 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a005c821-9a23-46ed-9b7a-2cc11e7db64d-logs" (OuterVolumeSpecName: "logs") pod "a005c821-9a23-46ed-9b7a-2cc11e7db64d" (UID: "a005c821-9a23-46ed-9b7a-2cc11e7db64d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.164170 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a005c821-9a23-46ed-9b7a-2cc11e7db64d-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.169489 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a005c821-9a23-46ed-9b7a-2cc11e7db64d-kube-api-access-2vd68" (OuterVolumeSpecName: "kube-api-access-2vd68") pod "a005c821-9a23-46ed-9b7a-2cc11e7db64d" (UID: "a005c821-9a23-46ed-9b7a-2cc11e7db64d"). InnerVolumeSpecName "kube-api-access-2vd68". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.198426 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a005c821-9a23-46ed-9b7a-2cc11e7db64d-config-data" (OuterVolumeSpecName: "config-data") pod "a005c821-9a23-46ed-9b7a-2cc11e7db64d" (UID: "a005c821-9a23-46ed-9b7a-2cc11e7db64d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.200734 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a005c821-9a23-46ed-9b7a-2cc11e7db64d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a005c821-9a23-46ed-9b7a-2cc11e7db64d" (UID: "a005c821-9a23-46ed-9b7a-2cc11e7db64d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.266766 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005c821-9a23-46ed-9b7a-2cc11e7db64d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.266811 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vd68\" (UniqueName: \"kubernetes.io/projected/a005c821-9a23-46ed-9b7a-2cc11e7db64d-kube-api-access-2vd68\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.266822 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005c821-9a23-46ed-9b7a-2cc11e7db64d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.387906 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.416003 4717 generic.go:334] "Generic (PLEG): container finished" podID="a005c821-9a23-46ed-9b7a-2cc11e7db64d" containerID="73842bd2159ffad5e52fd32b1924af781e256e618b1c057d92c836d540fe8101" exitCode=0 Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.416085 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.416130 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a005c821-9a23-46ed-9b7a-2cc11e7db64d","Type":"ContainerDied","Data":"73842bd2159ffad5e52fd32b1924af781e256e618b1c057d92c836d540fe8101"} Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.416172 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a005c821-9a23-46ed-9b7a-2cc11e7db64d","Type":"ContainerDied","Data":"94fc8b96e3776ffb5a33f6b25c9cf8a9626ea7f1f7ae6910f8c1e1e5000f7a5f"} Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.416192 4717 scope.go:117] "RemoveContainer" containerID="73842bd2159ffad5e52fd32b1924af781e256e618b1c057d92c836d540fe8101" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.420348 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"73da3578-c044-4462-9bdc-0985effde3bf","Type":"ContainerStarted","Data":"7470681f6b202e2069c38e1a946c720733ef50a7ff4108cefc378c0496a6cdeb"} Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.454625 4717 scope.go:117] "RemoveContainer" containerID="83e0129d59962bf80de5d9ef8427b674ec2f4f9dfd22d5aa4930580a75e61a0e" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.456947 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.477076 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.483543 4717 scope.go:117] "RemoveContainer" containerID="73842bd2159ffad5e52fd32b1924af781e256e618b1c057d92c836d540fe8101" Feb 18 12:10:18 crc kubenswrapper[4717]: E0218 12:10:18.484308 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"73842bd2159ffad5e52fd32b1924af781e256e618b1c057d92c836d540fe8101\": container with ID starting with 73842bd2159ffad5e52fd32b1924af781e256e618b1c057d92c836d540fe8101 not found: ID does not exist" containerID="73842bd2159ffad5e52fd32b1924af781e256e618b1c057d92c836d540fe8101" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.484405 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73842bd2159ffad5e52fd32b1924af781e256e618b1c057d92c836d540fe8101"} err="failed to get container status \"73842bd2159ffad5e52fd32b1924af781e256e618b1c057d92c836d540fe8101\": rpc error: code = NotFound desc = could not find container \"73842bd2159ffad5e52fd32b1924af781e256e618b1c057d92c836d540fe8101\": container with ID starting with 73842bd2159ffad5e52fd32b1924af781e256e618b1c057d92c836d540fe8101 not found: ID does not exist" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.484442 4717 scope.go:117] "RemoveContainer" containerID="83e0129d59962bf80de5d9ef8427b674ec2f4f9dfd22d5aa4930580a75e61a0e" Feb 18 12:10:18 crc kubenswrapper[4717]: E0218 12:10:18.484933 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83e0129d59962bf80de5d9ef8427b674ec2f4f9dfd22d5aa4930580a75e61a0e\": container with ID starting with 83e0129d59962bf80de5d9ef8427b674ec2f4f9dfd22d5aa4930580a75e61a0e not found: ID does not exist" containerID="83e0129d59962bf80de5d9ef8427b674ec2f4f9dfd22d5aa4930580a75e61a0e" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.484963 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e0129d59962bf80de5d9ef8427b674ec2f4f9dfd22d5aa4930580a75e61a0e"} err="failed to get container status \"83e0129d59962bf80de5d9ef8427b674ec2f4f9dfd22d5aa4930580a75e61a0e\": rpc error: code = NotFound desc = could not find container \"83e0129d59962bf80de5d9ef8427b674ec2f4f9dfd22d5aa4930580a75e61a0e\": container with ID 
starting with 83e0129d59962bf80de5d9ef8427b674ec2f4f9dfd22d5aa4930580a75e61a0e not found: ID does not exist" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.495238 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 12:10:18 crc kubenswrapper[4717]: E0218 12:10:18.496256 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a005c821-9a23-46ed-9b7a-2cc11e7db64d" containerName="nova-api-api" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.496371 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="a005c821-9a23-46ed-9b7a-2cc11e7db64d" containerName="nova-api-api" Feb 18 12:10:18 crc kubenswrapper[4717]: E0218 12:10:18.496487 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a005c821-9a23-46ed-9b7a-2cc11e7db64d" containerName="nova-api-log" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.496578 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="a005c821-9a23-46ed-9b7a-2cc11e7db64d" containerName="nova-api-log" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.496906 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="a005c821-9a23-46ed-9b7a-2cc11e7db64d" containerName="nova-api-api" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.497018 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="a005c821-9a23-46ed-9b7a-2cc11e7db64d" containerName="nova-api-log" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.498489 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.501292 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.501582 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.502059 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.504632 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.574842 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.574917 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-public-tls-certs\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.574954 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.575102 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-logs\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.575145 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2ndm\" (UniqueName: \"kubernetes.io/projected/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-kube-api-access-n2ndm\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.575205 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-config-data\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.677777 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-config-data\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.677909 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.677951 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-public-tls-certs\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 
12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.677983 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.678059 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-logs\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.678100 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2ndm\" (UniqueName: \"kubernetes.io/projected/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-kube-api-access-n2ndm\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.679698 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-logs\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.685500 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-public-tls-certs\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.685513 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.685850 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-config-data\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.686633 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.695456 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2ndm\" (UniqueName: \"kubernetes.io/projected/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-kube-api-access-n2ndm\") pod \"nova-api-0\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " pod="openstack/nova-api-0" Feb 18 12:10:18 crc kubenswrapper[4717]: I0218 12:10:18.846401 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 12:10:19 crc kubenswrapper[4717]: I0218 12:10:19.061656 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a005c821-9a23-46ed-9b7a-2cc11e7db64d" path="/var/lib/kubelet/pods/a005c821-9a23-46ed-9b7a-2cc11e7db64d/volumes" Feb 18 12:10:19 crc kubenswrapper[4717]: I0218 12:10:19.062782 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ded17957-de76-4763-9646-b78cf28cad08" path="/var/lib/kubelet/pods/ded17957-de76-4763-9646-b78cf28cad08/volumes" Feb 18 12:10:19 crc kubenswrapper[4717]: I0218 12:10:19.307422 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:10:19 crc kubenswrapper[4717]: I0218 12:10:19.436691 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"73da3578-c044-4462-9bdc-0985effde3bf","Type":"ContainerStarted","Data":"5cad67f4f0a2dfe3e302046f24449844a686f8a18519c9f57adfb20388e2371a"} Feb 18 12:10:19 crc kubenswrapper[4717]: I0218 12:10:19.438813 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a8bbcfa-7798-46a5-855b-9d81ff5b5133","Type":"ContainerStarted","Data":"9ab7e3bfcee0bf38b6cb6992196f2081f1606f0b6670a48d6f2e18c71cbc8985"} Feb 18 12:10:19 crc kubenswrapper[4717]: I0218 12:10:19.977319 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:19 crc kubenswrapper[4717]: I0218 12:10:19.998558 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.457807 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a8bbcfa-7798-46a5-855b-9d81ff5b5133","Type":"ContainerStarted","Data":"1292edad895b043158a6b7174ccc8c48c5f16e92c565679300875f63d4dc6405"} Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 
12:10:20.458359 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a8bbcfa-7798-46a5-855b-9d81ff5b5133","Type":"ContainerStarted","Data":"e191a06115cfec40a30453550e343fff9b48fd3a47f478f79c103bfca89ca71f"} Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.460765 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"73da3578-c044-4462-9bdc-0985effde3bf","Type":"ContainerStarted","Data":"a02d8d2bf0023098e2cb6404711f9aaa6c1f868cd7e95fc63305444d811362be"} Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.485928 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.485897521 podStartE2EDuration="2.485897521s" podCreationTimestamp="2026-02-18 12:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:10:20.479407124 +0000 UTC m=+1254.881508440" watchObservedRunningTime="2026-02-18 12:10:20.485897521 +0000 UTC m=+1254.887998837" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.487967 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.715014 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-vksh9"] Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.716492 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vksh9" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.718934 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.722293 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.735185 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vksh9"] Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.839876 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nlms\" (UniqueName: \"kubernetes.io/projected/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-kube-api-access-4nlms\") pod \"nova-cell1-cell-mapping-vksh9\" (UID: \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\") " pod="openstack/nova-cell1-cell-mapping-vksh9" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.839991 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-config-data\") pod \"nova-cell1-cell-mapping-vksh9\" (UID: \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\") " pod="openstack/nova-cell1-cell-mapping-vksh9" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.840096 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-scripts\") pod \"nova-cell1-cell-mapping-vksh9\" (UID: \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\") " pod="openstack/nova-cell1-cell-mapping-vksh9" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.840163 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vksh9\" (UID: \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\") " pod="openstack/nova-cell1-cell-mapping-vksh9" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.941972 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nlms\" (UniqueName: \"kubernetes.io/projected/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-kube-api-access-4nlms\") pod \"nova-cell1-cell-mapping-vksh9\" (UID: \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\") " pod="openstack/nova-cell1-cell-mapping-vksh9" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.942126 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-config-data\") pod \"nova-cell1-cell-mapping-vksh9\" (UID: \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\") " pod="openstack/nova-cell1-cell-mapping-vksh9" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.942237 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-scripts\") pod \"nova-cell1-cell-mapping-vksh9\" (UID: \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\") " pod="openstack/nova-cell1-cell-mapping-vksh9" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.942327 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vksh9\" (UID: \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\") " pod="openstack/nova-cell1-cell-mapping-vksh9" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.949892 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vksh9\" (UID: \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\") " pod="openstack/nova-cell1-cell-mapping-vksh9" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.949891 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-scripts\") pod \"nova-cell1-cell-mapping-vksh9\" (UID: \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\") " pod="openstack/nova-cell1-cell-mapping-vksh9" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.950213 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-config-data\") pod \"nova-cell1-cell-mapping-vksh9\" (UID: \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\") " pod="openstack/nova-cell1-cell-mapping-vksh9" Feb 18 12:10:20 crc kubenswrapper[4717]: I0218 12:10:20.965276 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nlms\" (UniqueName: \"kubernetes.io/projected/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-kube-api-access-4nlms\") pod \"nova-cell1-cell-mapping-vksh9\" (UID: \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\") " pod="openstack/nova-cell1-cell-mapping-vksh9" Feb 18 12:10:21 crc kubenswrapper[4717]: I0218 12:10:21.099385 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vksh9" Feb 18 12:10:21 crc kubenswrapper[4717]: I0218 12:10:21.496344 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"73da3578-c044-4462-9bdc-0985effde3bf","Type":"ContainerStarted","Data":"c25cb9e50c098944ca99fef5da20d8de1d1ad0625db79acb004050f7107e03b0"} Feb 18 12:10:21 crc kubenswrapper[4717]: I0218 12:10:21.537936 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vksh9"] Feb 18 12:10:21 crc kubenswrapper[4717]: I0218 12:10:21.990503 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.088228 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-snrr6"] Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.088596 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-snrr6" podUID="0f7f26e5-3242-42f2-97b2-a989658f9950" containerName="dnsmasq-dns" containerID="cri-o://a4840c59ccfede7aeb9ad252d88c81fe2bbbc060d6f93d8e5827b94ee7820edc" gracePeriod=10 Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.516346 4717 generic.go:334] "Generic (PLEG): container finished" podID="0f7f26e5-3242-42f2-97b2-a989658f9950" containerID="a4840c59ccfede7aeb9ad252d88c81fe2bbbc060d6f93d8e5827b94ee7820edc" exitCode=0 Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.516440 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-snrr6" event={"ID":"0f7f26e5-3242-42f2-97b2-a989658f9950","Type":"ContainerDied","Data":"a4840c59ccfede7aeb9ad252d88c81fe2bbbc060d6f93d8e5827b94ee7820edc"} Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.522091 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vksh9" 
event={"ID":"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27","Type":"ContainerStarted","Data":"66ecaaf20cf9f194565a1e8f7b3f4e9546da4a91c020cdf73195dbccdd17c4b6"} Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.522369 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vksh9" event={"ID":"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27","Type":"ContainerStarted","Data":"4d66d8d32737fc45c23d1691daff61a0d806f190efa686d9eb5b0ea39c93c9fb"} Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.551132 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-vksh9" podStartSLOduration=2.551105413 podStartE2EDuration="2.551105413s" podCreationTimestamp="2026-02-18 12:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:10:22.547704925 +0000 UTC m=+1256.949806241" watchObservedRunningTime="2026-02-18 12:10:22.551105413 +0000 UTC m=+1256.953206719" Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.651966 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.811447 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97n7d\" (UniqueName: \"kubernetes.io/projected/0f7f26e5-3242-42f2-97b2-a989658f9950-kube-api-access-97n7d\") pod \"0f7f26e5-3242-42f2-97b2-a989658f9950\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.811537 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-config\") pod \"0f7f26e5-3242-42f2-97b2-a989658f9950\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.811596 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-dns-svc\") pod \"0f7f26e5-3242-42f2-97b2-a989658f9950\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.811665 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-ovsdbserver-nb\") pod \"0f7f26e5-3242-42f2-97b2-a989658f9950\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.811741 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-dns-swift-storage-0\") pod \"0f7f26e5-3242-42f2-97b2-a989658f9950\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.811812 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-ovsdbserver-sb\") pod \"0f7f26e5-3242-42f2-97b2-a989658f9950\" (UID: \"0f7f26e5-3242-42f2-97b2-a989658f9950\") " Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.833919 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f7f26e5-3242-42f2-97b2-a989658f9950-kube-api-access-97n7d" (OuterVolumeSpecName: "kube-api-access-97n7d") pod "0f7f26e5-3242-42f2-97b2-a989658f9950" (UID: "0f7f26e5-3242-42f2-97b2-a989658f9950"). InnerVolumeSpecName "kube-api-access-97n7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.890017 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-config" (OuterVolumeSpecName: "config") pod "0f7f26e5-3242-42f2-97b2-a989658f9950" (UID: "0f7f26e5-3242-42f2-97b2-a989658f9950"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.895813 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0f7f26e5-3242-42f2-97b2-a989658f9950" (UID: "0f7f26e5-3242-42f2-97b2-a989658f9950"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.897966 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0f7f26e5-3242-42f2-97b2-a989658f9950" (UID: "0f7f26e5-3242-42f2-97b2-a989658f9950"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.899555 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0f7f26e5-3242-42f2-97b2-a989658f9950" (UID: "0f7f26e5-3242-42f2-97b2-a989658f9950"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.908155 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0f7f26e5-3242-42f2-97b2-a989658f9950" (UID: "0f7f26e5-3242-42f2-97b2-a989658f9950"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.917114 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97n7d\" (UniqueName: \"kubernetes.io/projected/0f7f26e5-3242-42f2-97b2-a989658f9950-kube-api-access-97n7d\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.917158 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.917170 4717 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.917183 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 
18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.917192 4717 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:22 crc kubenswrapper[4717]: I0218 12:10:22.917200 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f7f26e5-3242-42f2-97b2-a989658f9950-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:23 crc kubenswrapper[4717]: I0218 12:10:23.541199 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-snrr6" event={"ID":"0f7f26e5-3242-42f2-97b2-a989658f9950","Type":"ContainerDied","Data":"7de73cd576f8b6589042052dbb3009f35ed29ff1de7515daec8154bb8c2829a2"} Feb 18 12:10:23 crc kubenswrapper[4717]: I0218 12:10:23.541292 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-snrr6" Feb 18 12:10:23 crc kubenswrapper[4717]: I0218 12:10:23.541720 4717 scope.go:117] "RemoveContainer" containerID="a4840c59ccfede7aeb9ad252d88c81fe2bbbc060d6f93d8e5827b94ee7820edc" Feb 18 12:10:23 crc kubenswrapper[4717]: I0218 12:10:23.555565 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"73da3578-c044-4462-9bdc-0985effde3bf","Type":"ContainerStarted","Data":"a8a5191a9bd399fc0cddab7488129d227d0abc1a632f0cbace980625cadd4cd3"} Feb 18 12:10:23 crc kubenswrapper[4717]: I0218 12:10:23.555925 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 12:10:23 crc kubenswrapper[4717]: I0218 12:10:23.569726 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-snrr6"] Feb 18 12:10:23 crc kubenswrapper[4717]: I0218 12:10:23.581247 4717 scope.go:117] "RemoveContainer" 
containerID="b201b857e7f37a66084ba2a6451f11313d97cac41c6d6d96fce6f7fee4944308" Feb 18 12:10:23 crc kubenswrapper[4717]: I0218 12:10:23.582580 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-snrr6"] Feb 18 12:10:23 crc kubenswrapper[4717]: I0218 12:10:23.609314 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.281919722 podStartE2EDuration="6.609283693s" podCreationTimestamp="2026-02-18 12:10:17 +0000 UTC" firstStartedPulling="2026-02-18 12:10:18.392032612 +0000 UTC m=+1252.794133928" lastFinishedPulling="2026-02-18 12:10:22.719396583 +0000 UTC m=+1257.121497899" observedRunningTime="2026-02-18 12:10:23.602047794 +0000 UTC m=+1258.004149140" watchObservedRunningTime="2026-02-18 12:10:23.609283693 +0000 UTC m=+1258.011385009" Feb 18 12:10:25 crc kubenswrapper[4717]: I0218 12:10:25.049090 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f7f26e5-3242-42f2-97b2-a989658f9950" path="/var/lib/kubelet/pods/0f7f26e5-3242-42f2-97b2-a989658f9950/volumes" Feb 18 12:10:28 crc kubenswrapper[4717]: I0218 12:10:28.615005 4717 generic.go:334] "Generic (PLEG): container finished" podID="6e0b59b0-6221-4d9e-b59e-99b82ab9dd27" containerID="66ecaaf20cf9f194565a1e8f7b3f4e9546da4a91c020cdf73195dbccdd17c4b6" exitCode=0 Feb 18 12:10:28 crc kubenswrapper[4717]: I0218 12:10:28.615089 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vksh9" event={"ID":"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27","Type":"ContainerDied","Data":"66ecaaf20cf9f194565a1e8f7b3f4e9546da4a91c020cdf73195dbccdd17c4b6"} Feb 18 12:10:28 crc kubenswrapper[4717]: I0218 12:10:28.847169 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 12:10:28 crc kubenswrapper[4717]: I0218 12:10:28.847236 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Feb 18 12:10:29 crc kubenswrapper[4717]: I0218 12:10:29.863521 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1a8bbcfa-7798-46a5-855b-9d81ff5b5133" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 12:10:29 crc kubenswrapper[4717]: I0218 12:10:29.864302 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1a8bbcfa-7798-46a5-855b-9d81ff5b5133" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.176945 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vksh9" Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.219551 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-config-data\") pod \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\" (UID: \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\") " Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.219608 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-combined-ca-bundle\") pod \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\" (UID: \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\") " Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.219648 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nlms\" (UniqueName: \"kubernetes.io/projected/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-kube-api-access-4nlms\") pod \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\" (UID: 
\"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\") " Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.219729 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-scripts\") pod \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\" (UID: \"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27\") " Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.235024 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-scripts" (OuterVolumeSpecName: "scripts") pod "6e0b59b0-6221-4d9e-b59e-99b82ab9dd27" (UID: "6e0b59b0-6221-4d9e-b59e-99b82ab9dd27"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.235727 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-kube-api-access-4nlms" (OuterVolumeSpecName: "kube-api-access-4nlms") pod "6e0b59b0-6221-4d9e-b59e-99b82ab9dd27" (UID: "6e0b59b0-6221-4d9e-b59e-99b82ab9dd27"). InnerVolumeSpecName "kube-api-access-4nlms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.259204 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-config-data" (OuterVolumeSpecName: "config-data") pod "6e0b59b0-6221-4d9e-b59e-99b82ab9dd27" (UID: "6e0b59b0-6221-4d9e-b59e-99b82ab9dd27"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.270410 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e0b59b0-6221-4d9e-b59e-99b82ab9dd27" (UID: "6e0b59b0-6221-4d9e-b59e-99b82ab9dd27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.322616 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.322684 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.322702 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nlms\" (UniqueName: \"kubernetes.io/projected/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-kube-api-access-4nlms\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.322715 4717 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.639654 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vksh9" event={"ID":"6e0b59b0-6221-4d9e-b59e-99b82ab9dd27","Type":"ContainerDied","Data":"4d66d8d32737fc45c23d1691daff61a0d806f190efa686d9eb5b0ea39c93c9fb"} Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.640214 4717 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="4d66d8d32737fc45c23d1691daff61a0d806f190efa686d9eb5b0ea39c93c9fb" Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.639756 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vksh9" Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.834835 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.835345 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1a8bbcfa-7798-46a5-855b-9d81ff5b5133" containerName="nova-api-log" containerID="cri-o://e191a06115cfec40a30453550e343fff9b48fd3a47f478f79c103bfca89ca71f" gracePeriod=30 Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.835439 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1a8bbcfa-7798-46a5-855b-9d81ff5b5133" containerName="nova-api-api" containerID="cri-o://1292edad895b043158a6b7174ccc8c48c5f16e92c565679300875f63d4dc6405" gracePeriod=30 Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.858388 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.858705 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="01cb18f3-386e-419b-ac3f-42603ef1dac5" containerName="nova-scheduler-scheduler" containerID="cri-o://fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210" gracePeriod=30 Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.910460 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.912101 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="92d7bf71-c090-4908-a5b3-0e5b1551b137" 
containerName="nova-metadata-log" containerID="cri-o://6dc516091b3f13d57f0ce7db9e0aca937ad7659f630f8e14d9d8c716c392fe34" gracePeriod=30 Feb 18 12:10:30 crc kubenswrapper[4717]: I0218 12:10:30.912207 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="92d7bf71-c090-4908-a5b3-0e5b1551b137" containerName="nova-metadata-metadata" containerID="cri-o://1d0b37b52c1518b058029985130700ff1b266a597f9c8caca55cbbae99b90040" gracePeriod=30 Feb 18 12:10:31 crc kubenswrapper[4717]: E0218 12:10:31.391828 4717 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 12:10:31 crc kubenswrapper[4717]: E0218 12:10:31.394106 4717 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 12:10:31 crc kubenswrapper[4717]: E0218 12:10:31.395593 4717 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 12:10:31 crc kubenswrapper[4717]: E0218 12:10:31.395674 4717 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" 
podUID="01cb18f3-386e-419b-ac3f-42603ef1dac5" containerName="nova-scheduler-scheduler" Feb 18 12:10:31 crc kubenswrapper[4717]: I0218 12:10:31.652674 4717 generic.go:334] "Generic (PLEG): container finished" podID="1a8bbcfa-7798-46a5-855b-9d81ff5b5133" containerID="e191a06115cfec40a30453550e343fff9b48fd3a47f478f79c103bfca89ca71f" exitCode=143 Feb 18 12:10:31 crc kubenswrapper[4717]: I0218 12:10:31.652739 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a8bbcfa-7798-46a5-855b-9d81ff5b5133","Type":"ContainerDied","Data":"e191a06115cfec40a30453550e343fff9b48fd3a47f478f79c103bfca89ca71f"} Feb 18 12:10:31 crc kubenswrapper[4717]: I0218 12:10:31.655225 4717 generic.go:334] "Generic (PLEG): container finished" podID="92d7bf71-c090-4908-a5b3-0e5b1551b137" containerID="6dc516091b3f13d57f0ce7db9e0aca937ad7659f630f8e14d9d8c716c392fe34" exitCode=143 Feb 18 12:10:31 crc kubenswrapper[4717]: I0218 12:10:31.655275 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92d7bf71-c090-4908-a5b3-0e5b1551b137","Type":"ContainerDied","Data":"6dc516091b3f13d57f0ce7db9e0aca937ad7659f630f8e14d9d8c716c392fe34"} Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.044412 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="92d7bf71-c090-4908-a5b3-0e5b1551b137" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:55824->10.217.0.199:8775: read: connection reset by peer" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.045096 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="92d7bf71-c090-4908-a5b3-0e5b1551b137" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:55812->10.217.0.199:8775: read: connection reset by peer" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 
12:10:34.657880 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.686858 4717 generic.go:334] "Generic (PLEG): container finished" podID="92d7bf71-c090-4908-a5b3-0e5b1551b137" containerID="1d0b37b52c1518b058029985130700ff1b266a597f9c8caca55cbbae99b90040" exitCode=0 Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.686927 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92d7bf71-c090-4908-a5b3-0e5b1551b137","Type":"ContainerDied","Data":"1d0b37b52c1518b058029985130700ff1b266a597f9c8caca55cbbae99b90040"} Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.686970 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92d7bf71-c090-4908-a5b3-0e5b1551b137","Type":"ContainerDied","Data":"5b0de8107314b95246b124fe8b9fa2dfb7c0660e6a35febe9464bfb738b04aa0"} Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.686992 4717 scope.go:117] "RemoveContainer" containerID="1d0b37b52c1518b058029985130700ff1b266a597f9c8caca55cbbae99b90040" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.687164 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.725579 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-config-data\") pod \"92d7bf71-c090-4908-a5b3-0e5b1551b137\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.725755 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-combined-ca-bundle\") pod \"92d7bf71-c090-4908-a5b3-0e5b1551b137\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.725782 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d7bf71-c090-4908-a5b3-0e5b1551b137-logs\") pod \"92d7bf71-c090-4908-a5b3-0e5b1551b137\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.725898 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-nova-metadata-tls-certs\") pod \"92d7bf71-c090-4908-a5b3-0e5b1551b137\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.726012 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rhhk\" (UniqueName: \"kubernetes.io/projected/92d7bf71-c090-4908-a5b3-0e5b1551b137-kube-api-access-8rhhk\") pod \"92d7bf71-c090-4908-a5b3-0e5b1551b137\" (UID: \"92d7bf71-c090-4908-a5b3-0e5b1551b137\") " Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.727355 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/92d7bf71-c090-4908-a5b3-0e5b1551b137-logs" (OuterVolumeSpecName: "logs") pod "92d7bf71-c090-4908-a5b3-0e5b1551b137" (UID: "92d7bf71-c090-4908-a5b3-0e5b1551b137"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.739944 4717 scope.go:117] "RemoveContainer" containerID="6dc516091b3f13d57f0ce7db9e0aca937ad7659f630f8e14d9d8c716c392fe34" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.742817 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d7bf71-c090-4908-a5b3-0e5b1551b137-kube-api-access-8rhhk" (OuterVolumeSpecName: "kube-api-access-8rhhk") pod "92d7bf71-c090-4908-a5b3-0e5b1551b137" (UID: "92d7bf71-c090-4908-a5b3-0e5b1551b137"). InnerVolumeSpecName "kube-api-access-8rhhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.776517 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-config-data" (OuterVolumeSpecName: "config-data") pod "92d7bf71-c090-4908-a5b3-0e5b1551b137" (UID: "92d7bf71-c090-4908-a5b3-0e5b1551b137"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.786483 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92d7bf71-c090-4908-a5b3-0e5b1551b137" (UID: "92d7bf71-c090-4908-a5b3-0e5b1551b137"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.812636 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "92d7bf71-c090-4908-a5b3-0e5b1551b137" (UID: "92d7bf71-c090-4908-a5b3-0e5b1551b137"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.828984 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.829019 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92d7bf71-c090-4908-a5b3-0e5b1551b137-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.829032 4717 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.829043 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rhhk\" (UniqueName: \"kubernetes.io/projected/92d7bf71-c090-4908-a5b3-0e5b1551b137-kube-api-access-8rhhk\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.829052 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d7bf71-c090-4908-a5b3-0e5b1551b137-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.869364 4717 scope.go:117] "RemoveContainer" 
containerID="1d0b37b52c1518b058029985130700ff1b266a597f9c8caca55cbbae99b90040" Feb 18 12:10:34 crc kubenswrapper[4717]: E0218 12:10:34.870532 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d0b37b52c1518b058029985130700ff1b266a597f9c8caca55cbbae99b90040\": container with ID starting with 1d0b37b52c1518b058029985130700ff1b266a597f9c8caca55cbbae99b90040 not found: ID does not exist" containerID="1d0b37b52c1518b058029985130700ff1b266a597f9c8caca55cbbae99b90040" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.870584 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d0b37b52c1518b058029985130700ff1b266a597f9c8caca55cbbae99b90040"} err="failed to get container status \"1d0b37b52c1518b058029985130700ff1b266a597f9c8caca55cbbae99b90040\": rpc error: code = NotFound desc = could not find container \"1d0b37b52c1518b058029985130700ff1b266a597f9c8caca55cbbae99b90040\": container with ID starting with 1d0b37b52c1518b058029985130700ff1b266a597f9c8caca55cbbae99b90040 not found: ID does not exist" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.870612 4717 scope.go:117] "RemoveContainer" containerID="6dc516091b3f13d57f0ce7db9e0aca937ad7659f630f8e14d9d8c716c392fe34" Feb 18 12:10:34 crc kubenswrapper[4717]: E0218 12:10:34.871003 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dc516091b3f13d57f0ce7db9e0aca937ad7659f630f8e14d9d8c716c392fe34\": container with ID starting with 6dc516091b3f13d57f0ce7db9e0aca937ad7659f630f8e14d9d8c716c392fe34 not found: ID does not exist" containerID="6dc516091b3f13d57f0ce7db9e0aca937ad7659f630f8e14d9d8c716c392fe34" Feb 18 12:10:34 crc kubenswrapper[4717]: I0218 12:10:34.871066 4717 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6dc516091b3f13d57f0ce7db9e0aca937ad7659f630f8e14d9d8c716c392fe34"} err="failed to get container status \"6dc516091b3f13d57f0ce7db9e0aca937ad7659f630f8e14d9d8c716c392fe34\": rpc error: code = NotFound desc = could not find container \"6dc516091b3f13d57f0ce7db9e0aca937ad7659f630f8e14d9d8c716c392fe34\": container with ID starting with 6dc516091b3f13d57f0ce7db9e0aca937ad7659f630f8e14d9d8c716c392fe34 not found: ID does not exist" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.027908 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.051278 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.057061 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:10:35 crc kubenswrapper[4717]: E0218 12:10:35.057550 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d7bf71-c090-4908-a5b3-0e5b1551b137" containerName="nova-metadata-metadata" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.057570 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d7bf71-c090-4908-a5b3-0e5b1551b137" containerName="nova-metadata-metadata" Feb 18 12:10:35 crc kubenswrapper[4717]: E0218 12:10:35.057587 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d7bf71-c090-4908-a5b3-0e5b1551b137" containerName="nova-metadata-log" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.057594 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d7bf71-c090-4908-a5b3-0e5b1551b137" containerName="nova-metadata-log" Feb 18 12:10:35 crc kubenswrapper[4717]: E0218 12:10:35.057620 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f7f26e5-3242-42f2-97b2-a989658f9950" containerName="dnsmasq-dns" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.057626 4717 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0f7f26e5-3242-42f2-97b2-a989658f9950" containerName="dnsmasq-dns" Feb 18 12:10:35 crc kubenswrapper[4717]: E0218 12:10:35.057645 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e0b59b0-6221-4d9e-b59e-99b82ab9dd27" containerName="nova-manage" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.057652 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e0b59b0-6221-4d9e-b59e-99b82ab9dd27" containerName="nova-manage" Feb 18 12:10:35 crc kubenswrapper[4717]: E0218 12:10:35.057662 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f7f26e5-3242-42f2-97b2-a989658f9950" containerName="init" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.057671 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f7f26e5-3242-42f2-97b2-a989658f9950" containerName="init" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.057892 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e0b59b0-6221-4d9e-b59e-99b82ab9dd27" containerName="nova-manage" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.057910 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d7bf71-c090-4908-a5b3-0e5b1551b137" containerName="nova-metadata-metadata" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.057925 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f7f26e5-3242-42f2-97b2-a989658f9950" containerName="dnsmasq-dns" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.057935 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d7bf71-c090-4908-a5b3-0e5b1551b137" containerName="nova-metadata-log" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.059055 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.063341 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.063616 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.077700 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.134479 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc\") " pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.134535 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc\") " pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.134628 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkx5l\" (UniqueName: \"kubernetes.io/projected/0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc-kube-api-access-lkx5l\") pod \"nova-metadata-0\" (UID: \"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc\") " pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.134690 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc-logs\") pod 
\"nova-metadata-0\" (UID: \"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc\") " pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.134817 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc-config-data\") pod \"nova-metadata-0\" (UID: \"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc\") " pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.237184 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkx5l\" (UniqueName: \"kubernetes.io/projected/0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc-kube-api-access-lkx5l\") pod \"nova-metadata-0\" (UID: \"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc\") " pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.238091 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc-logs\") pod \"nova-metadata-0\" (UID: \"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc\") " pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.238444 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc-config-data\") pod \"nova-metadata-0\" (UID: \"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc\") " pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.238614 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc-logs\") pod \"nova-metadata-0\" (UID: \"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc\") " pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.238832 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc\") " pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.238981 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc\") " pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.243575 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc\") " pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.244141 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc\") " pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.244243 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc-config-data\") pod \"nova-metadata-0\" (UID: \"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc\") " pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.260472 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkx5l\" (UniqueName: \"kubernetes.io/projected/0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc-kube-api-access-lkx5l\") pod 
\"nova-metadata-0\" (UID: \"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc\") " pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.381108 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.739825 4717 generic.go:334] "Generic (PLEG): container finished" podID="1a8bbcfa-7798-46a5-855b-9d81ff5b5133" containerID="1292edad895b043158a6b7174ccc8c48c5f16e92c565679300875f63d4dc6405" exitCode=0 Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.740003 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a8bbcfa-7798-46a5-855b-9d81ff5b5133","Type":"ContainerDied","Data":"1292edad895b043158a6b7174ccc8c48c5f16e92c565679300875f63d4dc6405"} Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.786679 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.870018 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-internal-tls-certs\") pod \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.870101 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-combined-ca-bundle\") pod \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.870151 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-public-tls-certs\") pod 
\"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.870211 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-config-data\") pod \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.870425 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2ndm\" (UniqueName: \"kubernetes.io/projected/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-kube-api-access-n2ndm\") pod \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.870477 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-logs\") pod \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\" (UID: \"1a8bbcfa-7798-46a5-855b-9d81ff5b5133\") " Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.871648 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-logs" (OuterVolumeSpecName: "logs") pod "1a8bbcfa-7798-46a5-855b-9d81ff5b5133" (UID: "1a8bbcfa-7798-46a5-855b-9d81ff5b5133"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.891674 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-kube-api-access-n2ndm" (OuterVolumeSpecName: "kube-api-access-n2ndm") pod "1a8bbcfa-7798-46a5-855b-9d81ff5b5133" (UID: "1a8bbcfa-7798-46a5-855b-9d81ff5b5133"). InnerVolumeSpecName "kube-api-access-n2ndm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.905144 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-config-data" (OuterVolumeSpecName: "config-data") pod "1a8bbcfa-7798-46a5-855b-9d81ff5b5133" (UID: "1a8bbcfa-7798-46a5-855b-9d81ff5b5133"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.923786 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a8bbcfa-7798-46a5-855b-9d81ff5b5133" (UID: "1a8bbcfa-7798-46a5-855b-9d81ff5b5133"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.949632 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1a8bbcfa-7798-46a5-855b-9d81ff5b5133" (UID: "1a8bbcfa-7798-46a5-855b-9d81ff5b5133"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.952402 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1a8bbcfa-7798-46a5-855b-9d81ff5b5133" (UID: "1a8bbcfa-7798-46a5-855b-9d81ff5b5133"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.976013 4717 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.976062 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.976076 4717 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.976089 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.976102 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2ndm\" (UniqueName: \"kubernetes.io/projected/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-kube-api-access-n2ndm\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:35 crc kubenswrapper[4717]: I0218 12:10:35.976189 4717 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a8bbcfa-7798-46a5-855b-9d81ff5b5133-logs\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:36 crc kubenswrapper[4717]: I0218 12:10:36.008977 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 12:10:36 crc kubenswrapper[4717]: W0218 12:10:36.010594 4717 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e3146ec_b27b_4767_9f6f_2bc5b6cb92cc.slice/crio-f7369fb0ec5f2cb514fd5a6f4978eb789307ac6286eeaa2de12f267455d80934 WatchSource:0}: Error finding container f7369fb0ec5f2cb514fd5a6f4978eb789307ac6286eeaa2de12f267455d80934: Status 404 returned error can't find the container with id f7369fb0ec5f2cb514fd5a6f4978eb789307ac6286eeaa2de12f267455d80934 Feb 18 12:10:37 crc kubenswrapper[4717]: E0218 12:10:36.398763 4717 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210 is running failed: container process not found" containerID="fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 12:10:37 crc kubenswrapper[4717]: E0218 12:10:36.401478 4717 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210 is running failed: container process not found" containerID="fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 12:10:37 crc kubenswrapper[4717]: E0218 12:10:36.401826 4717 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210 is running failed: container process not found" containerID="fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 12:10:37 crc kubenswrapper[4717]: E0218 12:10:36.401870 4717 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="01cb18f3-386e-419b-ac3f-42603ef1dac5" containerName="nova-scheduler-scheduler" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.598711 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.702131 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-924nx\" (UniqueName: \"kubernetes.io/projected/01cb18f3-386e-419b-ac3f-42603ef1dac5-kube-api-access-924nx\") pod \"01cb18f3-386e-419b-ac3f-42603ef1dac5\" (UID: \"01cb18f3-386e-419b-ac3f-42603ef1dac5\") " Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.702228 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01cb18f3-386e-419b-ac3f-42603ef1dac5-config-data\") pod \"01cb18f3-386e-419b-ac3f-42603ef1dac5\" (UID: \"01cb18f3-386e-419b-ac3f-42603ef1dac5\") " Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.702384 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb18f3-386e-419b-ac3f-42603ef1dac5-combined-ca-bundle\") pod \"01cb18f3-386e-419b-ac3f-42603ef1dac5\" (UID: \"01cb18f3-386e-419b-ac3f-42603ef1dac5\") " Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.708728 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cb18f3-386e-419b-ac3f-42603ef1dac5-kube-api-access-924nx" (OuterVolumeSpecName: "kube-api-access-924nx") pod "01cb18f3-386e-419b-ac3f-42603ef1dac5" (UID: "01cb18f3-386e-419b-ac3f-42603ef1dac5"). InnerVolumeSpecName "kube-api-access-924nx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.739237 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb18f3-386e-419b-ac3f-42603ef1dac5-config-data" (OuterVolumeSpecName: "config-data") pod "01cb18f3-386e-419b-ac3f-42603ef1dac5" (UID: "01cb18f3-386e-419b-ac3f-42603ef1dac5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.741948 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb18f3-386e-419b-ac3f-42603ef1dac5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01cb18f3-386e-419b-ac3f-42603ef1dac5" (UID: "01cb18f3-386e-419b-ac3f-42603ef1dac5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.764674 4717 generic.go:334] "Generic (PLEG): container finished" podID="01cb18f3-386e-419b-ac3f-42603ef1dac5" containerID="fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210" exitCode=0 Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.764756 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"01cb18f3-386e-419b-ac3f-42603ef1dac5","Type":"ContainerDied","Data":"fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210"} Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.764789 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"01cb18f3-386e-419b-ac3f-42603ef1dac5","Type":"ContainerDied","Data":"90307c79a957683da019f9693da102aa4ec7f132d7f7fa09b9339db2f80b88c0"} Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.764809 4717 scope.go:117] "RemoveContainer" containerID="fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210" Feb 18 12:10:37 crc 
kubenswrapper[4717]: I0218 12:10:36.764915 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.780457 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc","Type":"ContainerStarted","Data":"7241e7bbce11e7bcc6a66b4a857339364a17daf833274f62a2777f1271ea716f"} Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.780522 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc","Type":"ContainerStarted","Data":"05472e55375a055893a01453ca38b28eae5ec7fcb43ef05236c5e71dd4596f4f"} Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.780537 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc","Type":"ContainerStarted","Data":"f7369fb0ec5f2cb514fd5a6f4978eb789307ac6286eeaa2de12f267455d80934"} Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.790884 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1a8bbcfa-7798-46a5-855b-9d81ff5b5133","Type":"ContainerDied","Data":"9ab7e3bfcee0bf38b6cb6992196f2081f1606f0b6670a48d6f2e18c71cbc8985"} Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.791038 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.809024 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-924nx\" (UniqueName: \"kubernetes.io/projected/01cb18f3-386e-419b-ac3f-42603ef1dac5-kube-api-access-924nx\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.809077 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01cb18f3-386e-419b-ac3f-42603ef1dac5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.809095 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb18f3-386e-419b-ac3f-42603ef1dac5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.821816 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.8217903130000002 podStartE2EDuration="1.821790313s" podCreationTimestamp="2026-02-18 12:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:10:36.814143772 +0000 UTC m=+1271.216245098" watchObservedRunningTime="2026-02-18 12:10:36.821790313 +0000 UTC m=+1271.223891629" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.831826 4717 scope.go:117] "RemoveContainer" containerID="fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210" Feb 18 12:10:37 crc kubenswrapper[4717]: E0218 12:10:36.837877 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210\": container with ID starting with fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210 not found: ID does not exist" 
containerID="fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.837936 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210"} err="failed to get container status \"fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210\": rpc error: code = NotFound desc = could not find container \"fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210\": container with ID starting with fc36d5a735bcfebe49f8f79a3d8054dca11c1196bd3e51b10aa46ca78a43d210 not found: ID does not exist" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.837976 4717 scope.go:117] "RemoveContainer" containerID="1292edad895b043158a6b7174ccc8c48c5f16e92c565679300875f63d4dc6405" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.943511 4717 scope.go:117] "RemoveContainer" containerID="e191a06115cfec40a30453550e343fff9b48fd3a47f478f79c103bfca89ca71f" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.950523 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:36.977592 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.025606 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.052995 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01cb18f3-386e-419b-ac3f-42603ef1dac5" path="/var/lib/kubelet/pods/01cb18f3-386e-419b-ac3f-42603ef1dac5/volumes" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.053697 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d7bf71-c090-4908-a5b3-0e5b1551b137" path="/var/lib/kubelet/pods/92d7bf71-c090-4908-a5b3-0e5b1551b137/volumes" Feb 18 12:10:37 crc 
kubenswrapper[4717]: I0218 12:10:37.054440 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.054473 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 12:10:37 crc kubenswrapper[4717]: E0218 12:10:37.054888 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a8bbcfa-7798-46a5-855b-9d81ff5b5133" containerName="nova-api-api" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.054905 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a8bbcfa-7798-46a5-855b-9d81ff5b5133" containerName="nova-api-api" Feb 18 12:10:37 crc kubenswrapper[4717]: E0218 12:10:37.054928 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01cb18f3-386e-419b-ac3f-42603ef1dac5" containerName="nova-scheduler-scheduler" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.054935 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="01cb18f3-386e-419b-ac3f-42603ef1dac5" containerName="nova-scheduler-scheduler" Feb 18 12:10:37 crc kubenswrapper[4717]: E0218 12:10:37.054944 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a8bbcfa-7798-46a5-855b-9d81ff5b5133" containerName="nova-api-log" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.054951 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a8bbcfa-7798-46a5-855b-9d81ff5b5133" containerName="nova-api-log" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.055210 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="01cb18f3-386e-419b-ac3f-42603ef1dac5" containerName="nova-scheduler-scheduler" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.055237 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a8bbcfa-7798-46a5-855b-9d81ff5b5133" containerName="nova-api-api" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.055247 4717 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1a8bbcfa-7798-46a5-855b-9d81ff5b5133" containerName="nova-api-log" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.056163 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.059197 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.066004 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.081442 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.085734 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.088726 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.088825 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.088916 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.094068 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.242521 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b4f4f54-2066-4277-a45f-aefd9dc8130c-logs\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.242585 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b4f4f54-2066-4277-a45f-aefd9dc8130c-config-data\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.242647 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhw2l\" (UniqueName: \"kubernetes.io/projected/e9941a28-9836-48c1-bab2-c55c92861692-kube-api-access-mhw2l\") pod \"nova-scheduler-0\" (UID: \"e9941a28-9836-48c1-bab2-c55c92861692\") " pod="openstack/nova-scheduler-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.243686 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4f4f54-2066-4277-a45f-aefd9dc8130c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.244027 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4f4f54-2066-4277-a45f-aefd9dc8130c-public-tls-certs\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.244291 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xhvt\" (UniqueName: \"kubernetes.io/projected/2b4f4f54-2066-4277-a45f-aefd9dc8130c-kube-api-access-2xhvt\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.244338 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2b4f4f54-2066-4277-a45f-aefd9dc8130c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.244618 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9941a28-9836-48c1-bab2-c55c92861692-config-data\") pod \"nova-scheduler-0\" (UID: \"e9941a28-9836-48c1-bab2-c55c92861692\") " pod="openstack/nova-scheduler-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.245039 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9941a28-9836-48c1-bab2-c55c92861692-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e9941a28-9836-48c1-bab2-c55c92861692\") " pod="openstack/nova-scheduler-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.346518 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b4f4f54-2066-4277-a45f-aefd9dc8130c-logs\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.346640 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b4f4f54-2066-4277-a45f-aefd9dc8130c-config-data\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.346679 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhw2l\" (UniqueName: \"kubernetes.io/projected/e9941a28-9836-48c1-bab2-c55c92861692-kube-api-access-mhw2l\") pod \"nova-scheduler-0\" (UID: \"e9941a28-9836-48c1-bab2-c55c92861692\") " 
pod="openstack/nova-scheduler-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.346714 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4f4f54-2066-4277-a45f-aefd9dc8130c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.346774 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4f4f54-2066-4277-a45f-aefd9dc8130c-public-tls-certs\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.346799 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xhvt\" (UniqueName: \"kubernetes.io/projected/2b4f4f54-2066-4277-a45f-aefd9dc8130c-kube-api-access-2xhvt\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.346822 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b4f4f54-2066-4277-a45f-aefd9dc8130c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.346853 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9941a28-9836-48c1-bab2-c55c92861692-config-data\") pod \"nova-scheduler-0\" (UID: \"e9941a28-9836-48c1-bab2-c55c92861692\") " pod="openstack/nova-scheduler-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.346905 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e9941a28-9836-48c1-bab2-c55c92861692-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e9941a28-9836-48c1-bab2-c55c92861692\") " pod="openstack/nova-scheduler-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.347078 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b4f4f54-2066-4277-a45f-aefd9dc8130c-logs\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.352357 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4f4f54-2066-4277-a45f-aefd9dc8130c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.352612 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b4f4f54-2066-4277-a45f-aefd9dc8130c-public-tls-certs\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.352712 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b4f4f54-2066-4277-a45f-aefd9dc8130c-config-data\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.354005 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b4f4f54-2066-4277-a45f-aefd9dc8130c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.364439 4717 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9941a28-9836-48c1-bab2-c55c92861692-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e9941a28-9836-48c1-bab2-c55c92861692\") " pod="openstack/nova-scheduler-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.365685 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9941a28-9836-48c1-bab2-c55c92861692-config-data\") pod \"nova-scheduler-0\" (UID: \"e9941a28-9836-48c1-bab2-c55c92861692\") " pod="openstack/nova-scheduler-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.378951 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhw2l\" (UniqueName: \"kubernetes.io/projected/e9941a28-9836-48c1-bab2-c55c92861692-kube-api-access-mhw2l\") pod \"nova-scheduler-0\" (UID: \"e9941a28-9836-48c1-bab2-c55c92861692\") " pod="openstack/nova-scheduler-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.378987 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xhvt\" (UniqueName: \"kubernetes.io/projected/2b4f4f54-2066-4277-a45f-aefd9dc8130c-kube-api-access-2xhvt\") pod \"nova-api-0\" (UID: \"2b4f4f54-2066-4277-a45f-aefd9dc8130c\") " pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.391780 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.413868 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.912064 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 12:10:37 crc kubenswrapper[4717]: W0218 12:10:37.918181 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9941a28_9836_48c1_bab2_c55c92861692.slice/crio-ff4c95907ba1543cc872d182a55c0bf778d042e7f7c7c61f8d35a431c02f9b9e WatchSource:0}: Error finding container ff4c95907ba1543cc872d182a55c0bf778d042e7f7c7c61f8d35a431c02f9b9e: Status 404 returned error can't find the container with id ff4c95907ba1543cc872d182a55c0bf778d042e7f7c7c61f8d35a431c02f9b9e Feb 18 12:10:37 crc kubenswrapper[4717]: I0218 12:10:37.996216 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 12:10:38 crc kubenswrapper[4717]: I0218 12:10:38.840030 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b4f4f54-2066-4277-a45f-aefd9dc8130c","Type":"ContainerStarted","Data":"542f17d8bf607285c64ecfe8e1df0e6681da508d88e86b9be6722948a15a11c2"} Feb 18 12:10:38 crc kubenswrapper[4717]: I0218 12:10:38.840486 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b4f4f54-2066-4277-a45f-aefd9dc8130c","Type":"ContainerStarted","Data":"6aa86496aabc2567d4343d33ea9af00a579a3fe253f6c92627e5073bee2b71ec"} Feb 18 12:10:38 crc kubenswrapper[4717]: I0218 12:10:38.840502 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2b4f4f54-2066-4277-a45f-aefd9dc8130c","Type":"ContainerStarted","Data":"60e7df3bff91b60e51fadd22030a192ebf63e39e8550d814d2a2fd2c066d504f"} Feb 18 12:10:38 crc kubenswrapper[4717]: I0218 12:10:38.842543 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"e9941a28-9836-48c1-bab2-c55c92861692","Type":"ContainerStarted","Data":"689c81a8d0fec5f06f93bc5966d373cbd71e4d8ac4b66bf16e22fb0b13fd3279"} Feb 18 12:10:38 crc kubenswrapper[4717]: I0218 12:10:38.842606 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e9941a28-9836-48c1-bab2-c55c92861692","Type":"ContainerStarted","Data":"ff4c95907ba1543cc872d182a55c0bf778d042e7f7c7c61f8d35a431c02f9b9e"} Feb 18 12:10:38 crc kubenswrapper[4717]: I0218 12:10:38.869866 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.86983781 podStartE2EDuration="2.86983781s" podCreationTimestamp="2026-02-18 12:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:10:38.865676519 +0000 UTC m=+1273.267777845" watchObservedRunningTime="2026-02-18 12:10:38.86983781 +0000 UTC m=+1273.271939126" Feb 18 12:10:39 crc kubenswrapper[4717]: I0218 12:10:39.049710 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a8bbcfa-7798-46a5-855b-9d81ff5b5133" path="/var/lib/kubelet/pods/1a8bbcfa-7798-46a5-855b-9d81ff5b5133/volumes" Feb 18 12:10:40 crc kubenswrapper[4717]: I0218 12:10:40.381624 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 12:10:40 crc kubenswrapper[4717]: I0218 12:10:40.383083 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 12:10:42 crc kubenswrapper[4717]: I0218 12:10:42.392514 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 12:10:42 crc kubenswrapper[4717]: I0218 12:10:42.773469 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:10:42 crc kubenswrapper[4717]: I0218 12:10:42.774245 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:10:42 crc kubenswrapper[4717]: I0218 12:10:42.774497 4717 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 12:10:42 crc kubenswrapper[4717]: I0218 12:10:42.775671 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8619a78ac97f793404bae81465977701fb9cf7482a56c2df47cd47e2df2d8754"} pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:10:42 crc kubenswrapper[4717]: I0218 12:10:42.775848 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" containerID="cri-o://8619a78ac97f793404bae81465977701fb9cf7482a56c2df47cd47e2df2d8754" gracePeriod=600 Feb 18 12:10:43 crc kubenswrapper[4717]: I0218 12:10:43.894275 4717 generic.go:334] "Generic (PLEG): container finished" podID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerID="8619a78ac97f793404bae81465977701fb9cf7482a56c2df47cd47e2df2d8754" exitCode=0 Feb 18 12:10:43 crc kubenswrapper[4717]: I0218 12:10:43.894384 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" 
event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerDied","Data":"8619a78ac97f793404bae81465977701fb9cf7482a56c2df47cd47e2df2d8754"} Feb 18 12:10:43 crc kubenswrapper[4717]: I0218 12:10:43.895086 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"8029dd99738cdaa1b1753862d62d575af02645633ab25bd8d0586f5ebdffe632"} Feb 18 12:10:43 crc kubenswrapper[4717]: I0218 12:10:43.895119 4717 scope.go:117] "RemoveContainer" containerID="ed34df06cd7a6fd0e8376737775379dc22fdfd46578a3bf716f5810ce3e16a5f" Feb 18 12:10:43 crc kubenswrapper[4717]: I0218 12:10:43.917868 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=7.917849683 podStartE2EDuration="7.917849683s" podCreationTimestamp="2026-02-18 12:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:10:38.904437539 +0000 UTC m=+1273.306538855" watchObservedRunningTime="2026-02-18 12:10:43.917849683 +0000 UTC m=+1278.319950999" Feb 18 12:10:45 crc kubenswrapper[4717]: I0218 12:10:45.381995 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 12:10:45 crc kubenswrapper[4717]: I0218 12:10:45.382377 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 12:10:46 crc kubenswrapper[4717]: I0218 12:10:46.399479 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 12:10:46 crc kubenswrapper[4717]: I0218 
12:10:46.399512 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 12:10:47 crc kubenswrapper[4717]: I0218 12:10:47.392761 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 12:10:47 crc kubenswrapper[4717]: I0218 12:10:47.414969 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 12:10:47 crc kubenswrapper[4717]: I0218 12:10:47.415085 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 12:10:47 crc kubenswrapper[4717]: I0218 12:10:47.434576 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 12:10:47 crc kubenswrapper[4717]: I0218 12:10:47.898403 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 12:10:48 crc kubenswrapper[4717]: I0218 12:10:48.015926 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 12:10:48 crc kubenswrapper[4717]: I0218 12:10:48.420513 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2b4f4f54-2066-4277-a45f-aefd9dc8130c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.211:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 12:10:48 crc kubenswrapper[4717]: I0218 12:10:48.424504 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2b4f4f54-2066-4277-a45f-aefd9dc8130c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.211:8774/\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 12:10:55 crc kubenswrapper[4717]: I0218 12:10:55.387855 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 12:10:55 crc kubenswrapper[4717]: I0218 12:10:55.388434 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 12:10:55 crc kubenswrapper[4717]: I0218 12:10:55.394075 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 12:10:55 crc kubenswrapper[4717]: I0218 12:10:55.394564 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 12:10:57 crc kubenswrapper[4717]: I0218 12:10:57.425614 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 12:10:57 crc kubenswrapper[4717]: I0218 12:10:57.426554 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 12:10:57 crc kubenswrapper[4717]: I0218 12:10:57.431134 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 12:10:57 crc kubenswrapper[4717]: I0218 12:10:57.433735 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 12:10:58 crc kubenswrapper[4717]: I0218 12:10:58.069741 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 12:10:58 crc kubenswrapper[4717]: I0218 12:10:58.087321 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 12:11:06 crc kubenswrapper[4717]: I0218 12:11:06.583102 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 12:11:07 crc kubenswrapper[4717]: I0218 12:11:07.797361 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 12:11:11 crc kubenswrapper[4717]: I0218 12:11:11.362222 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="636b0761-84e8-4d2f-88f4-4845e2a05f80" containerName="rabbitmq" containerID="cri-o://e213779ca6b86907b4e563ef03958688b5a8c45dec2d35346bfbf6cb3e8ba5b2" gracePeriod=604796 Feb 18 12:11:11 crc kubenswrapper[4717]: I0218 12:11:11.929691 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="468aa28e-8245-4024-815a-24d469dc17bf" containerName="rabbitmq" containerID="cri-o://de82c264624647485061aa85113df9ce8f39d0e71721f79034675fb3dfb42eda" gracePeriod=604796 Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.009245 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.088396 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nlx7\" (UniqueName: \"kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-kube-api-access-6nlx7\") pod \"636b0761-84e8-4d2f-88f4-4845e2a05f80\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.088776 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-tls\") pod \"636b0761-84e8-4d2f-88f4-4845e2a05f80\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.088819 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-plugins\") pod \"636b0761-84e8-4d2f-88f4-4845e2a05f80\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") 
" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.088927 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-config-data\") pod \"636b0761-84e8-4d2f-88f4-4845e2a05f80\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.088973 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/636b0761-84e8-4d2f-88f4-4845e2a05f80-erlang-cookie-secret\") pod \"636b0761-84e8-4d2f-88f4-4845e2a05f80\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.089476 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "636b0761-84e8-4d2f-88f4-4845e2a05f80" (UID: "636b0761-84e8-4d2f-88f4-4845e2a05f80"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.089841 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-plugins-conf\") pod \"636b0761-84e8-4d2f-88f4-4845e2a05f80\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.089995 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-server-conf\") pod \"636b0761-84e8-4d2f-88f4-4845e2a05f80\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.090063 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/636b0761-84e8-4d2f-88f4-4845e2a05f80-pod-info\") pod \"636b0761-84e8-4d2f-88f4-4845e2a05f80\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.090142 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-erlang-cookie\") pod \"636b0761-84e8-4d2f-88f4-4845e2a05f80\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.090224 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"636b0761-84e8-4d2f-88f4-4845e2a05f80\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.090279 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-confd\") pod \"636b0761-84e8-4d2f-88f4-4845e2a05f80\" (UID: \"636b0761-84e8-4d2f-88f4-4845e2a05f80\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.090354 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "636b0761-84e8-4d2f-88f4-4845e2a05f80" (UID: "636b0761-84e8-4d2f-88f4-4845e2a05f80"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.090922 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "636b0761-84e8-4d2f-88f4-4845e2a05f80" (UID: "636b0761-84e8-4d2f-88f4-4845e2a05f80"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.091919 4717 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.091953 4717 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.091965 4717 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.101813 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/636b0761-84e8-4d2f-88f4-4845e2a05f80-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "636b0761-84e8-4d2f-88f4-4845e2a05f80" (UID: "636b0761-84e8-4d2f-88f4-4845e2a05f80"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.103581 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "636b0761-84e8-4d2f-88f4-4845e2a05f80" (UID: "636b0761-84e8-4d2f-88f4-4845e2a05f80"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.106288 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-kube-api-access-6nlx7" (OuterVolumeSpecName: "kube-api-access-6nlx7") pod "636b0761-84e8-4d2f-88f4-4845e2a05f80" (UID: "636b0761-84e8-4d2f-88f4-4845e2a05f80"). InnerVolumeSpecName "kube-api-access-6nlx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.106522 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "636b0761-84e8-4d2f-88f4-4845e2a05f80" (UID: "636b0761-84e8-4d2f-88f4-4845e2a05f80"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.111776 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/636b0761-84e8-4d2f-88f4-4845e2a05f80-pod-info" (OuterVolumeSpecName: "pod-info") pod "636b0761-84e8-4d2f-88f4-4845e2a05f80" (UID: "636b0761-84e8-4d2f-88f4-4845e2a05f80"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.156693 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-config-data" (OuterVolumeSpecName: "config-data") pod "636b0761-84e8-4d2f-88f4-4845e2a05f80" (UID: "636b0761-84e8-4d2f-88f4-4845e2a05f80"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.195490 4717 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/636b0761-84e8-4d2f-88f4-4845e2a05f80-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.195566 4717 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.195585 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nlx7\" (UniqueName: \"kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-kube-api-access-6nlx7\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.195599 4717 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.195612 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.195624 4717 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/636b0761-84e8-4d2f-88f4-4845e2a05f80-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.233106 4717 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.252147 4717 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-server-conf" (OuterVolumeSpecName: "server-conf") pod "636b0761-84e8-4d2f-88f4-4845e2a05f80" (UID: "636b0761-84e8-4d2f-88f4-4845e2a05f80"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.285601 4717 generic.go:334] "Generic (PLEG): container finished" podID="468aa28e-8245-4024-815a-24d469dc17bf" containerID="de82c264624647485061aa85113df9ce8f39d0e71721f79034675fb3dfb42eda" exitCode=0 Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.285698 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"468aa28e-8245-4024-815a-24d469dc17bf","Type":"ContainerDied","Data":"de82c264624647485061aa85113df9ce8f39d0e71721f79034675fb3dfb42eda"} Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.294950 4717 generic.go:334] "Generic (PLEG): container finished" podID="636b0761-84e8-4d2f-88f4-4845e2a05f80" containerID="e213779ca6b86907b4e563ef03958688b5a8c45dec2d35346bfbf6cb3e8ba5b2" exitCode=0 Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.295012 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"636b0761-84e8-4d2f-88f4-4845e2a05f80","Type":"ContainerDied","Data":"e213779ca6b86907b4e563ef03958688b5a8c45dec2d35346bfbf6cb3e8ba5b2"} Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.295056 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"636b0761-84e8-4d2f-88f4-4845e2a05f80","Type":"ContainerDied","Data":"0c2cd2d34fd880c3feaf3904e76bb042509613d2b6030490ce2a5a50a89967f4"} Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.295053 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.295076 4717 scope.go:117] "RemoveContainer" containerID="e213779ca6b86907b4e563ef03958688b5a8c45dec2d35346bfbf6cb3e8ba5b2" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.297187 4717 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.297208 4717 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/636b0761-84e8-4d2f-88f4-4845e2a05f80-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.325539 4717 scope.go:117] "RemoveContainer" containerID="8fc0fd4dff3445e269bbeb39633c839f589393a7bc4e27075b17d0d8abab0d28" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.355662 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "636b0761-84e8-4d2f-88f4-4845e2a05f80" (UID: "636b0761-84e8-4d2f-88f4-4845e2a05f80"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.364803 4717 scope.go:117] "RemoveContainer" containerID="e213779ca6b86907b4e563ef03958688b5a8c45dec2d35346bfbf6cb3e8ba5b2" Feb 18 12:11:18 crc kubenswrapper[4717]: E0218 12:11:18.365802 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e213779ca6b86907b4e563ef03958688b5a8c45dec2d35346bfbf6cb3e8ba5b2\": container with ID starting with e213779ca6b86907b4e563ef03958688b5a8c45dec2d35346bfbf6cb3e8ba5b2 not found: ID does not exist" containerID="e213779ca6b86907b4e563ef03958688b5a8c45dec2d35346bfbf6cb3e8ba5b2" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.365860 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e213779ca6b86907b4e563ef03958688b5a8c45dec2d35346bfbf6cb3e8ba5b2"} err="failed to get container status \"e213779ca6b86907b4e563ef03958688b5a8c45dec2d35346bfbf6cb3e8ba5b2\": rpc error: code = NotFound desc = could not find container \"e213779ca6b86907b4e563ef03958688b5a8c45dec2d35346bfbf6cb3e8ba5b2\": container with ID starting with e213779ca6b86907b4e563ef03958688b5a8c45dec2d35346bfbf6cb3e8ba5b2 not found: ID does not exist" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.365895 4717 scope.go:117] "RemoveContainer" containerID="8fc0fd4dff3445e269bbeb39633c839f589393a7bc4e27075b17d0d8abab0d28" Feb 18 12:11:18 crc kubenswrapper[4717]: E0218 12:11:18.366547 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fc0fd4dff3445e269bbeb39633c839f589393a7bc4e27075b17d0d8abab0d28\": container with ID starting with 8fc0fd4dff3445e269bbeb39633c839f589393a7bc4e27075b17d0d8abab0d28 not found: ID does not exist" containerID="8fc0fd4dff3445e269bbeb39633c839f589393a7bc4e27075b17d0d8abab0d28" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.366589 
4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fc0fd4dff3445e269bbeb39633c839f589393a7bc4e27075b17d0d8abab0d28"} err="failed to get container status \"8fc0fd4dff3445e269bbeb39633c839f589393a7bc4e27075b17d0d8abab0d28\": rpc error: code = NotFound desc = could not find container \"8fc0fd4dff3445e269bbeb39633c839f589393a7bc4e27075b17d0d8abab0d28\": container with ID starting with 8fc0fd4dff3445e269bbeb39633c839f589393a7bc4e27075b17d0d8abab0d28 not found: ID does not exist" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.398829 4717 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/636b0761-84e8-4d2f-88f4-4845e2a05f80-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.541677 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.602419 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-config-data\") pod \"468aa28e-8245-4024-815a-24d469dc17bf\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.602505 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-plugins-conf\") pod \"468aa28e-8245-4024-815a-24d469dc17bf\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.602538 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-erlang-cookie\") pod 
\"468aa28e-8245-4024-815a-24d469dc17bf\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.602598 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-server-conf\") pod \"468aa28e-8245-4024-815a-24d469dc17bf\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.602625 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/468aa28e-8245-4024-815a-24d469dc17bf-erlang-cookie-secret\") pod \"468aa28e-8245-4024-815a-24d469dc17bf\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.602711 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-confd\") pod \"468aa28e-8245-4024-815a-24d469dc17bf\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.602781 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-plugins\") pod \"468aa28e-8245-4024-815a-24d469dc17bf\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.602919 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-tls\") pod \"468aa28e-8245-4024-815a-24d469dc17bf\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.602962 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"468aa28e-8245-4024-815a-24d469dc17bf\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.602993 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9hnw\" (UniqueName: \"kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-kube-api-access-r9hnw\") pod \"468aa28e-8245-4024-815a-24d469dc17bf\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.603034 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/468aa28e-8245-4024-815a-24d469dc17bf-pod-info\") pod \"468aa28e-8245-4024-815a-24d469dc17bf\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.603650 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "468aa28e-8245-4024-815a-24d469dc17bf" (UID: "468aa28e-8245-4024-815a-24d469dc17bf"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.603958 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "468aa28e-8245-4024-815a-24d469dc17bf" (UID: "468aa28e-8245-4024-815a-24d469dc17bf"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.603970 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "468aa28e-8245-4024-815a-24d469dc17bf" (UID: "468aa28e-8245-4024-815a-24d469dc17bf"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.613796 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "468aa28e-8245-4024-815a-24d469dc17bf" (UID: "468aa28e-8245-4024-815a-24d469dc17bf"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.614526 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "468aa28e-8245-4024-815a-24d469dc17bf" (UID: "468aa28e-8245-4024-815a-24d469dc17bf"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.614560 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-kube-api-access-r9hnw" (OuterVolumeSpecName: "kube-api-access-r9hnw") pod "468aa28e-8245-4024-815a-24d469dc17bf" (UID: "468aa28e-8245-4024-815a-24d469dc17bf"). InnerVolumeSpecName "kube-api-access-r9hnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.614594 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/468aa28e-8245-4024-815a-24d469dc17bf-pod-info" (OuterVolumeSpecName: "pod-info") pod "468aa28e-8245-4024-815a-24d469dc17bf" (UID: "468aa28e-8245-4024-815a-24d469dc17bf"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.614658 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468aa28e-8245-4024-815a-24d469dc17bf-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "468aa28e-8245-4024-815a-24d469dc17bf" (UID: "468aa28e-8245-4024-815a-24d469dc17bf"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.676399 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.681227 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-config-data" (OuterVolumeSpecName: "config-data") pod "468aa28e-8245-4024-815a-24d469dc17bf" (UID: "468aa28e-8245-4024-815a-24d469dc17bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.702373 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.703798 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-server-conf" (OuterVolumeSpecName: "server-conf") pod "468aa28e-8245-4024-815a-24d469dc17bf" (UID: "468aa28e-8245-4024-815a-24d469dc17bf"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.705283 4717 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.705355 4717 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.705373 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9hnw\" (UniqueName: \"kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-kube-api-access-r9hnw\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.705386 4717 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/468aa28e-8245-4024-815a-24d469dc17bf-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.705398 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc 
kubenswrapper[4717]: I0218 12:11:18.705405 4717 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.705417 4717 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.705425 4717 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/468aa28e-8245-4024-815a-24d469dc17bf-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.705445 4717 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/468aa28e-8245-4024-815a-24d469dc17bf-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.705465 4717 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.713479 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 12:11:18 crc kubenswrapper[4717]: E0218 12:11:18.714432 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="468aa28e-8245-4024-815a-24d469dc17bf" containerName="setup-container" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.714553 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="468aa28e-8245-4024-815a-24d469dc17bf" containerName="setup-container" Feb 18 12:11:18 crc kubenswrapper[4717]: E0218 12:11:18.714648 4717 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="636b0761-84e8-4d2f-88f4-4845e2a05f80" containerName="rabbitmq" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.714714 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="636b0761-84e8-4d2f-88f4-4845e2a05f80" containerName="rabbitmq" Feb 18 12:11:18 crc kubenswrapper[4717]: E0218 12:11:18.714768 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="636b0761-84e8-4d2f-88f4-4845e2a05f80" containerName="setup-container" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.714840 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="636b0761-84e8-4d2f-88f4-4845e2a05f80" containerName="setup-container" Feb 18 12:11:18 crc kubenswrapper[4717]: E0218 12:11:18.714910 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="468aa28e-8245-4024-815a-24d469dc17bf" containerName="rabbitmq" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.714975 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="468aa28e-8245-4024-815a-24d469dc17bf" containerName="rabbitmq" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.715264 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="468aa28e-8245-4024-815a-24d469dc17bf" containerName="rabbitmq" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.715363 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="636b0761-84e8-4d2f-88f4-4845e2a05f80" containerName="rabbitmq" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.716925 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.720661 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.721493 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.722460 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.722739 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.723033 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-fn8c9" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.724330 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.724613 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.728888 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.806335 4717 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.806551 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "468aa28e-8245-4024-815a-24d469dc17bf" (UID: "468aa28e-8245-4024-815a-24d469dc17bf"). 
InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.808929 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-confd\") pod \"468aa28e-8245-4024-815a-24d469dc17bf\" (UID: \"468aa28e-8245-4024-815a-24d469dc17bf\") " Feb 18 12:11:18 crc kubenswrapper[4717]: W0218 12:11:18.809099 4717 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/468aa28e-8245-4024-815a-24d469dc17bf/volumes/kubernetes.io~projected/rabbitmq-confd Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.809434 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "468aa28e-8245-4024-815a-24d469dc17bf" (UID: "468aa28e-8245-4024-815a-24d469dc17bf"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.810472 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68js6\" (UniqueName: \"kubernetes.io/projected/57c90bed-ebc6-4053-b92b-1622edda048a-kube-api-access-68js6\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.810654 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57c90bed-ebc6-4053-b92b-1622edda048a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.810755 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57c90bed-ebc6-4053-b92b-1622edda048a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.810828 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.810884 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57c90bed-ebc6-4053-b92b-1622edda048a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc 
kubenswrapper[4717]: I0218 12:11:18.810910 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57c90bed-ebc6-4053-b92b-1622edda048a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.810983 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57c90bed-ebc6-4053-b92b-1622edda048a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.811090 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57c90bed-ebc6-4053-b92b-1622edda048a-config-data\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.811117 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57c90bed-ebc6-4053-b92b-1622edda048a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.811199 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57c90bed-ebc6-4053-b92b-1622edda048a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.811270 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57c90bed-ebc6-4053-b92b-1622edda048a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.813380 4717 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/468aa28e-8245-4024-815a-24d469dc17bf-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.814099 4717 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.916316 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57c90bed-ebc6-4053-b92b-1622edda048a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.916697 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57c90bed-ebc6-4053-b92b-1622edda048a-config-data\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.916778 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57c90bed-ebc6-4053-b92b-1622edda048a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.916890 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57c90bed-ebc6-4053-b92b-1622edda048a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.917008 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57c90bed-ebc6-4053-b92b-1622edda048a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.917099 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68js6\" (UniqueName: \"kubernetes.io/projected/57c90bed-ebc6-4053-b92b-1622edda048a-kube-api-access-68js6\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.917196 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/57c90bed-ebc6-4053-b92b-1622edda048a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.917312 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57c90bed-ebc6-4053-b92b-1622edda048a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.917398 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.917467 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57c90bed-ebc6-4053-b92b-1622edda048a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.917537 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57c90bed-ebc6-4053-b92b-1622edda048a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.917586 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57c90bed-ebc6-4053-b92b-1622edda048a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.917793 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.918174 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57c90bed-ebc6-4053-b92b-1622edda048a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " 
pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.918457 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57c90bed-ebc6-4053-b92b-1622edda048a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.918567 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57c90bed-ebc6-4053-b92b-1622edda048a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.919145 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57c90bed-ebc6-4053-b92b-1622edda048a-config-data\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.922678 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57c90bed-ebc6-4053-b92b-1622edda048a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.925054 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57c90bed-ebc6-4053-b92b-1622edda048a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.925672 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/57c90bed-ebc6-4053-b92b-1622edda048a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.925978 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57c90bed-ebc6-4053-b92b-1622edda048a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.940605 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68js6\" (UniqueName: \"kubernetes.io/projected/57c90bed-ebc6-4053-b92b-1622edda048a-kube-api-access-68js6\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:18 crc kubenswrapper[4717]: I0218 12:11:18.958552 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"57c90bed-ebc6-4053-b92b-1622edda048a\") " pod="openstack/rabbitmq-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.054002 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="636b0761-84e8-4d2f-88f4-4845e2a05f80" path="/var/lib/kubelet/pods/636b0761-84e8-4d2f-88f4-4845e2a05f80/volumes" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.180989 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.327464 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"468aa28e-8245-4024-815a-24d469dc17bf","Type":"ContainerDied","Data":"0f4ccb89f9f252e31de5481ad0e9ab8beda8ddb021193db7cce6b8a58bd4cb33"} Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.327504 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.327544 4717 scope.go:117] "RemoveContainer" containerID="de82c264624647485061aa85113df9ce8f39d0e71721f79034675fb3dfb42eda" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.395185 4717 scope.go:117] "RemoveContainer" containerID="ab59f199ed48325d66abc30818dd4af42a257558bd4238a3e87ab39177370fd9" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.400334 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.430036 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.458936 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.461778 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.464662 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.469278 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.469438 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dj5rd" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.469589 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.469617 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.469438 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.469948 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.487138 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.539201 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c31df4b-bbb2-4bdf-9c36-db03b261067c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.539305 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c31df4b-bbb2-4bdf-9c36-db03b261067c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.539368 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c31df4b-bbb2-4bdf-9c36-db03b261067c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.539408 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.539463 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c31df4b-bbb2-4bdf-9c36-db03b261067c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.539503 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c31df4b-bbb2-4bdf-9c36-db03b261067c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.539557 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/1c31df4b-bbb2-4bdf-9c36-db03b261067c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.539616 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sg4v\" (UniqueName: \"kubernetes.io/projected/1c31df4b-bbb2-4bdf-9c36-db03b261067c-kube-api-access-5sg4v\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.539672 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c31df4b-bbb2-4bdf-9c36-db03b261067c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.539817 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c31df4b-bbb2-4bdf-9c36-db03b261067c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.539849 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c31df4b-bbb2-4bdf-9c36-db03b261067c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.641480 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sg4v\" 
(UniqueName: \"kubernetes.io/projected/1c31df4b-bbb2-4bdf-9c36-db03b261067c-kube-api-access-5sg4v\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.641572 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c31df4b-bbb2-4bdf-9c36-db03b261067c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.641618 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c31df4b-bbb2-4bdf-9c36-db03b261067c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.641643 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c31df4b-bbb2-4bdf-9c36-db03b261067c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.641681 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c31df4b-bbb2-4bdf-9c36-db03b261067c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.641723 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c31df4b-bbb2-4bdf-9c36-db03b261067c-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.641750 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c31df4b-bbb2-4bdf-9c36-db03b261067c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.641778 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.641810 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c31df4b-bbb2-4bdf-9c36-db03b261067c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.641833 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c31df4b-bbb2-4bdf-9c36-db03b261067c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.641866 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c31df4b-bbb2-4bdf-9c36-db03b261067c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 
12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.642463 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.644306 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c31df4b-bbb2-4bdf-9c36-db03b261067c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.647585 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c31df4b-bbb2-4bdf-9c36-db03b261067c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.648666 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c31df4b-bbb2-4bdf-9c36-db03b261067c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.648918 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c31df4b-bbb2-4bdf-9c36-db03b261067c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.650936 4717 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c31df4b-bbb2-4bdf-9c36-db03b261067c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.651845 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c31df4b-bbb2-4bdf-9c36-db03b261067c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.654486 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c31df4b-bbb2-4bdf-9c36-db03b261067c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.656558 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c31df4b-bbb2-4bdf-9c36-db03b261067c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.659596 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c31df4b-bbb2-4bdf-9c36-db03b261067c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.671413 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sg4v\" (UniqueName: \"kubernetes.io/projected/1c31df4b-bbb2-4bdf-9c36-db03b261067c-kube-api-access-5sg4v\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.693993 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c31df4b-bbb2-4bdf-9c36-db03b261067c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.739638 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 12:11:19 crc kubenswrapper[4717]: I0218 12:11:19.814617 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.277014 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-brlgk"] Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.279826 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.282749 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.304204 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-brlgk"] Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.357106 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vv2f\" (UniqueName: \"kubernetes.io/projected/2c422c4c-87bb-416a-9912-4d27d870af50-kube-api-access-8vv2f\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.357308 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-config\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.357393 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.357436 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.357468 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.357498 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.357528 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.357913 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"57c90bed-ebc6-4053-b92b-1622edda048a","Type":"ContainerStarted","Data":"697b7083d0fa0207b3a95e037a2b074458f76f41fe2d49c60a7a2b3d35dfef25"} Feb 18 12:11:20 crc kubenswrapper[4717]: W0218 12:11:20.360983 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c31df4b_bbb2_4bdf_9c36_db03b261067c.slice/crio-7faa14f512af1724b0d087a653de1e4397b107379ea7ca4652912deea363e1a3 WatchSource:0}: Error finding container 7faa14f512af1724b0d087a653de1e4397b107379ea7ca4652912deea363e1a3: Status 404 
returned error can't find the container with id 7faa14f512af1724b0d087a653de1e4397b107379ea7ca4652912deea363e1a3 Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.361334 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.460927 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.461431 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.461506 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vv2f\" (UniqueName: \"kubernetes.io/projected/2c422c4c-87bb-416a-9912-4d27d870af50-kube-api-access-8vv2f\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.461590 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-config\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.461646 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.461679 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.461708 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.464295 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.464731 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.464899 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-config\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.465076 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.466220 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.466513 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.485908 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vv2f\" (UniqueName: \"kubernetes.io/projected/2c422c4c-87bb-416a-9912-4d27d870af50-kube-api-access-8vv2f\") pod \"dnsmasq-dns-79bd4cc8c9-brlgk\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:20 crc kubenswrapper[4717]: I0218 12:11:20.604455 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:21 crc kubenswrapper[4717]: I0218 12:11:21.064029 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="468aa28e-8245-4024-815a-24d469dc17bf" path="/var/lib/kubelet/pods/468aa28e-8245-4024-815a-24d469dc17bf/volumes" Feb 18 12:11:21 crc kubenswrapper[4717]: I0218 12:11:21.270522 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-brlgk"] Feb 18 12:11:21 crc kubenswrapper[4717]: W0218 12:11:21.278526 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c422c4c_87bb_416a_9912_4d27d870af50.slice/crio-b271b0c0cd023d9db32175e7befdaeba0e725ffd3d998c66eedefca70f3520fe WatchSource:0}: Error finding container b271b0c0cd023d9db32175e7befdaeba0e725ffd3d998c66eedefca70f3520fe: Status 404 returned error can't find the container with id b271b0c0cd023d9db32175e7befdaeba0e725ffd3d998c66eedefca70f3520fe Feb 18 12:11:21 crc kubenswrapper[4717]: I0218 12:11:21.373565 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1c31df4b-bbb2-4bdf-9c36-db03b261067c","Type":"ContainerStarted","Data":"7faa14f512af1724b0d087a653de1e4397b107379ea7ca4652912deea363e1a3"} Feb 18 12:11:21 crc kubenswrapper[4717]: I0218 12:11:21.375194 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" event={"ID":"2c422c4c-87bb-416a-9912-4d27d870af50","Type":"ContainerStarted","Data":"b271b0c0cd023d9db32175e7befdaeba0e725ffd3d998c66eedefca70f3520fe"} Feb 18 12:11:22 crc kubenswrapper[4717]: I0218 12:11:22.389367 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"57c90bed-ebc6-4053-b92b-1622edda048a","Type":"ContainerStarted","Data":"b4af9e7aefd2cb7fa11fd0256cb59aa4aa8bc664064ca667b46558aafe4b4acf"} Feb 18 12:11:22 crc kubenswrapper[4717]: 
I0218 12:11:22.391179 4717 generic.go:334] "Generic (PLEG): container finished" podID="2c422c4c-87bb-416a-9912-4d27d870af50" containerID="90795cc6192194d54995a213d13b3c592a9542adadf79097a805b8c01a64e2ce" exitCode=0 Feb 18 12:11:22 crc kubenswrapper[4717]: I0218 12:11:22.391348 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" event={"ID":"2c422c4c-87bb-416a-9912-4d27d870af50","Type":"ContainerDied","Data":"90795cc6192194d54995a213d13b3c592a9542adadf79097a805b8c01a64e2ce"} Feb 18 12:11:22 crc kubenswrapper[4717]: I0218 12:11:22.393838 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1c31df4b-bbb2-4bdf-9c36-db03b261067c","Type":"ContainerStarted","Data":"20d7b2db57ddc515e5a16fe89a70cfc90edb2080595524e0fe3649ed7d5f1838"} Feb 18 12:11:23 crc kubenswrapper[4717]: I0218 12:11:23.407775 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" event={"ID":"2c422c4c-87bb-416a-9912-4d27d870af50","Type":"ContainerStarted","Data":"81d0c447c4cd1bb2bf20c64b3b80c91bcde44dceb8277a48d7cd400f80466aeb"} Feb 18 12:11:23 crc kubenswrapper[4717]: I0218 12:11:23.409556 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="468aa28e-8245-4024-815a-24d469dc17bf" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: i/o timeout" Feb 18 12:11:23 crc kubenswrapper[4717]: I0218 12:11:23.435475 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" podStartSLOduration=3.435446688 podStartE2EDuration="3.435446688s" podCreationTimestamp="2026-02-18 12:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:11:23.427629164 +0000 UTC m=+1317.829730480" watchObservedRunningTime="2026-02-18 12:11:23.435446688 
+0000 UTC m=+1317.837548004" Feb 18 12:11:24 crc kubenswrapper[4717]: I0218 12:11:24.417955 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:30 crc kubenswrapper[4717]: I0218 12:11:30.606514 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:30 crc kubenswrapper[4717]: I0218 12:11:30.683295 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-fbdml"] Feb 18 12:11:30 crc kubenswrapper[4717]: I0218 12:11:30.683650 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" podUID="a1e157b4-5706-45aa-b5bf-9c5bd109b501" containerName="dnsmasq-dns" containerID="cri-o://7e4f0426d4fc9a404f8e3a17a0ed0132824fa56217543b65cfc846955501b330" gracePeriod=10 Feb 18 12:11:30 crc kubenswrapper[4717]: I0218 12:11:30.865874 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-q9j9f"] Feb 18 12:11:30 crc kubenswrapper[4717]: I0218 12:11:30.868108 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:30 crc kubenswrapper[4717]: I0218 12:11:30.888628 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-q9j9f"] Feb 18 12:11:30 crc kubenswrapper[4717]: I0218 12:11:30.925489 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:30 crc kubenswrapper[4717]: I0218 12:11:30.925569 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-config\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:30 crc kubenswrapper[4717]: I0218 12:11:30.925609 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-dns-svc\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:30 crc kubenswrapper[4717]: I0218 12:11:30.925664 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:30 crc kubenswrapper[4717]: I0218 12:11:30.925699 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:30 crc kubenswrapper[4717]: I0218 12:11:30.925752 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv2wl\" (UniqueName: \"kubernetes.io/projected/dffcf399-439f-4698-8a0a-b247675685be-kube-api-access-mv2wl\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:30 crc kubenswrapper[4717]: I0218 12:11:30.925814 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:30 crc kubenswrapper[4717]: E0218 12:11:30.946097 4717 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1e157b4_5706_45aa_b5bf_9c5bd109b501.slice/crio-7e4f0426d4fc9a404f8e3a17a0ed0132824fa56217543b65cfc846955501b330.scope\": RecentStats: unable to find data in memory cache]" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.027826 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.028270 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mv2wl\" (UniqueName: \"kubernetes.io/projected/dffcf399-439f-4698-8a0a-b247675685be-kube-api-access-mv2wl\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.028337 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.028407 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.028451 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-config\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.028481 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-dns-svc\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.028524 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.028931 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.031654 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.031655 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.031838 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.031830 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-dns-svc\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.032141 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dffcf399-439f-4698-8a0a-b247675685be-config\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.058510 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv2wl\" (UniqueName: \"kubernetes.io/projected/dffcf399-439f-4698-8a0a-b247675685be-kube-api-access-mv2wl\") pod \"dnsmasq-dns-55478c4467-q9j9f\" (UID: \"dffcf399-439f-4698-8a0a-b247675685be\") " pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.200766 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.330302 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.334434 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khkjl\" (UniqueName: \"kubernetes.io/projected/a1e157b4-5706-45aa-b5bf-9c5bd109b501-kube-api-access-khkjl\") pod \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.334536 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-ovsdbserver-nb\") pod \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.334731 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-dns-swift-storage-0\") pod \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.334808 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-dns-svc\") pod \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.334865 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-ovsdbserver-sb\") pod \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.345702 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a1e157b4-5706-45aa-b5bf-9c5bd109b501-kube-api-access-khkjl" (OuterVolumeSpecName: "kube-api-access-khkjl") pod "a1e157b4-5706-45aa-b5bf-9c5bd109b501" (UID: "a1e157b4-5706-45aa-b5bf-9c5bd109b501"). InnerVolumeSpecName "kube-api-access-khkjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.438054 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-config\") pod \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\" (UID: \"a1e157b4-5706-45aa-b5bf-9c5bd109b501\") " Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.439778 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khkjl\" (UniqueName: \"kubernetes.io/projected/a1e157b4-5706-45aa-b5bf-9c5bd109b501-kube-api-access-khkjl\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.445235 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a1e157b4-5706-45aa-b5bf-9c5bd109b501" (UID: "a1e157b4-5706-45aa-b5bf-9c5bd109b501"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.445231 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a1e157b4-5706-45aa-b5bf-9c5bd109b501" (UID: "a1e157b4-5706-45aa-b5bf-9c5bd109b501"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.448297 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a1e157b4-5706-45aa-b5bf-9c5bd109b501" (UID: "a1e157b4-5706-45aa-b5bf-9c5bd109b501"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.492111 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a1e157b4-5706-45aa-b5bf-9c5bd109b501" (UID: "a1e157b4-5706-45aa-b5bf-9c5bd109b501"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.493492 4717 generic.go:334] "Generic (PLEG): container finished" podID="a1e157b4-5706-45aa-b5bf-9c5bd109b501" containerID="7e4f0426d4fc9a404f8e3a17a0ed0132824fa56217543b65cfc846955501b330" exitCode=0 Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.493564 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" event={"ID":"a1e157b4-5706-45aa-b5bf-9c5bd109b501","Type":"ContainerDied","Data":"7e4f0426d4fc9a404f8e3a17a0ed0132824fa56217543b65cfc846955501b330"} Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.493600 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" event={"ID":"a1e157b4-5706-45aa-b5bf-9c5bd109b501","Type":"ContainerDied","Data":"50406ac86752084b324fc33d64e1893255ae06ed5e1c3873539ffedb54b3b8ce"} Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.493618 4717 scope.go:117] "RemoveContainer" 
containerID="7e4f0426d4fc9a404f8e3a17a0ed0132824fa56217543b65cfc846955501b330" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.493610 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-fbdml" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.503536 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-config" (OuterVolumeSpecName: "config") pod "a1e157b4-5706-45aa-b5bf-9c5bd109b501" (UID: "a1e157b4-5706-45aa-b5bf-9c5bd109b501"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.524315 4717 scope.go:117] "RemoveContainer" containerID="d79aeba9e6c587c26372dda795bcd3d46de2aebebcfe5685c6fd15612072a5cd" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.544227 4717 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.544296 4717 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.544310 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.544322 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.544333 4717 
reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1e157b4-5706-45aa-b5bf-9c5bd109b501-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.569714 4717 scope.go:117] "RemoveContainer" containerID="7e4f0426d4fc9a404f8e3a17a0ed0132824fa56217543b65cfc846955501b330" Feb 18 12:11:31 crc kubenswrapper[4717]: E0218 12:11:31.570435 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e4f0426d4fc9a404f8e3a17a0ed0132824fa56217543b65cfc846955501b330\": container with ID starting with 7e4f0426d4fc9a404f8e3a17a0ed0132824fa56217543b65cfc846955501b330 not found: ID does not exist" containerID="7e4f0426d4fc9a404f8e3a17a0ed0132824fa56217543b65cfc846955501b330" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.570500 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e4f0426d4fc9a404f8e3a17a0ed0132824fa56217543b65cfc846955501b330"} err="failed to get container status \"7e4f0426d4fc9a404f8e3a17a0ed0132824fa56217543b65cfc846955501b330\": rpc error: code = NotFound desc = could not find container \"7e4f0426d4fc9a404f8e3a17a0ed0132824fa56217543b65cfc846955501b330\": container with ID starting with 7e4f0426d4fc9a404f8e3a17a0ed0132824fa56217543b65cfc846955501b330 not found: ID does not exist" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.570536 4717 scope.go:117] "RemoveContainer" containerID="d79aeba9e6c587c26372dda795bcd3d46de2aebebcfe5685c6fd15612072a5cd" Feb 18 12:11:31 crc kubenswrapper[4717]: E0218 12:11:31.571121 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d79aeba9e6c587c26372dda795bcd3d46de2aebebcfe5685c6fd15612072a5cd\": container with ID starting with d79aeba9e6c587c26372dda795bcd3d46de2aebebcfe5685c6fd15612072a5cd not found: ID does not 
exist" containerID="d79aeba9e6c587c26372dda795bcd3d46de2aebebcfe5685c6fd15612072a5cd" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.571180 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79aeba9e6c587c26372dda795bcd3d46de2aebebcfe5685c6fd15612072a5cd"} err="failed to get container status \"d79aeba9e6c587c26372dda795bcd3d46de2aebebcfe5685c6fd15612072a5cd\": rpc error: code = NotFound desc = could not find container \"d79aeba9e6c587c26372dda795bcd3d46de2aebebcfe5685c6fd15612072a5cd\": container with ID starting with d79aeba9e6c587c26372dda795bcd3d46de2aebebcfe5685c6fd15612072a5cd not found: ID does not exist" Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.750201 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-q9j9f"] Feb 18 12:11:31 crc kubenswrapper[4717]: W0218 12:11:31.753539 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddffcf399_439f_4698_8a0a_b247675685be.slice/crio-2adcb1333266ff5657594eed8ce9876d6f7df4a88415a1e787de619a78aee0bb WatchSource:0}: Error finding container 2adcb1333266ff5657594eed8ce9876d6f7df4a88415a1e787de619a78aee0bb: Status 404 returned error can't find the container with id 2adcb1333266ff5657594eed8ce9876d6f7df4a88415a1e787de619a78aee0bb Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.970962 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-fbdml"] Feb 18 12:11:31 crc kubenswrapper[4717]: I0218 12:11:31.984705 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-fbdml"] Feb 18 12:11:32 crc kubenswrapper[4717]: I0218 12:11:32.507607 4717 generic.go:334] "Generic (PLEG): container finished" podID="dffcf399-439f-4698-8a0a-b247675685be" containerID="01dacaa420fca87d38fe0c74c637a5668d160aad8937ed7ed3dfc36369ec572d" exitCode=0 Feb 18 12:11:32 crc 
kubenswrapper[4717]: I0218 12:11:32.507687 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-q9j9f" event={"ID":"dffcf399-439f-4698-8a0a-b247675685be","Type":"ContainerDied","Data":"01dacaa420fca87d38fe0c74c637a5668d160aad8937ed7ed3dfc36369ec572d"} Feb 18 12:11:32 crc kubenswrapper[4717]: I0218 12:11:32.508432 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-q9j9f" event={"ID":"dffcf399-439f-4698-8a0a-b247675685be","Type":"ContainerStarted","Data":"2adcb1333266ff5657594eed8ce9876d6f7df4a88415a1e787de619a78aee0bb"} Feb 18 12:11:33 crc kubenswrapper[4717]: I0218 12:11:33.050085 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1e157b4-5706-45aa-b5bf-9c5bd109b501" path="/var/lib/kubelet/pods/a1e157b4-5706-45aa-b5bf-9c5bd109b501/volumes" Feb 18 12:11:33 crc kubenswrapper[4717]: I0218 12:11:33.525040 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-q9j9f" event={"ID":"dffcf399-439f-4698-8a0a-b247675685be","Type":"ContainerStarted","Data":"e1aa3795018027755dbcbbe97b11a7f141eb5c3add6268c60dd80377dbc7f83c"} Feb 18 12:11:33 crc kubenswrapper[4717]: I0218 12:11:33.525625 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:33 crc kubenswrapper[4717]: I0218 12:11:33.556057 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-q9j9f" podStartSLOduration=3.5560372129999998 podStartE2EDuration="3.556037213s" podCreationTimestamp="2026-02-18 12:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:11:33.551915995 +0000 UTC m=+1327.954017311" watchObservedRunningTime="2026-02-18 12:11:33.556037213 +0000 UTC m=+1327.958138529" Feb 18 12:11:37 crc kubenswrapper[4717]: I0218 12:11:37.895304 4717 
scope.go:117] "RemoveContainer" containerID="8f5872dd2f0e2109801e0061a67ff15112c762a403cda89b478c8ecbeb983b9c" Feb 18 12:11:37 crc kubenswrapper[4717]: I0218 12:11:37.925506 4717 scope.go:117] "RemoveContainer" containerID="8e848b6e53f0edfc455ea0d32f2bc5665ee1f6ba97202a4edfd327911137410a" Feb 18 12:11:37 crc kubenswrapper[4717]: I0218 12:11:37.949174 4717 scope.go:117] "RemoveContainer" containerID="3f073da1afc165273f734b8dcf207477d269b699a4e8564e6f252267d5993712" Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.202486 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-q9j9f" Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.278396 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-brlgk"] Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.279287 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" podUID="2c422c4c-87bb-416a-9912-4d27d870af50" containerName="dnsmasq-dns" containerID="cri-o://81d0c447c4cd1bb2bf20c64b3b80c91bcde44dceb8277a48d7cd400f80466aeb" gracePeriod=10 Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.619589 4717 generic.go:334] "Generic (PLEG): container finished" podID="2c422c4c-87bb-416a-9912-4d27d870af50" containerID="81d0c447c4cd1bb2bf20c64b3b80c91bcde44dceb8277a48d7cd400f80466aeb" exitCode=0 Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.620280 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" event={"ID":"2c422c4c-87bb-416a-9912-4d27d870af50","Type":"ContainerDied","Data":"81d0c447c4cd1bb2bf20c64b3b80c91bcde44dceb8277a48d7cd400f80466aeb"} Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.781155 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.903971 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vv2f\" (UniqueName: \"kubernetes.io/projected/2c422c4c-87bb-416a-9912-4d27d870af50-kube-api-access-8vv2f\") pod \"2c422c4c-87bb-416a-9912-4d27d870af50\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.904446 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-ovsdbserver-sb\") pod \"2c422c4c-87bb-416a-9912-4d27d870af50\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.904629 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-config\") pod \"2c422c4c-87bb-416a-9912-4d27d870af50\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.904825 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-ovsdbserver-nb\") pod \"2c422c4c-87bb-416a-9912-4d27d870af50\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.904975 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-openstack-edpm-ipam\") pod \"2c422c4c-87bb-416a-9912-4d27d870af50\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.905098 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-dns-swift-storage-0\") pod \"2c422c4c-87bb-416a-9912-4d27d870af50\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.905356 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-dns-svc\") pod \"2c422c4c-87bb-416a-9912-4d27d870af50\" (UID: \"2c422c4c-87bb-416a-9912-4d27d870af50\") " Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.911738 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c422c4c-87bb-416a-9912-4d27d870af50-kube-api-access-8vv2f" (OuterVolumeSpecName: "kube-api-access-8vv2f") pod "2c422c4c-87bb-416a-9912-4d27d870af50" (UID: "2c422c4c-87bb-416a-9912-4d27d870af50"). InnerVolumeSpecName "kube-api-access-8vv2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.959199 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2c422c4c-87bb-416a-9912-4d27d870af50" (UID: "2c422c4c-87bb-416a-9912-4d27d870af50"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.973523 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2c422c4c-87bb-416a-9912-4d27d870af50" (UID: "2c422c4c-87bb-416a-9912-4d27d870af50"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.980698 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "2c422c4c-87bb-416a-9912-4d27d870af50" (UID: "2c422c4c-87bb-416a-9912-4d27d870af50"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.991768 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c422c4c-87bb-416a-9912-4d27d870af50" (UID: "2c422c4c-87bb-416a-9912-4d27d870af50"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:41 crc kubenswrapper[4717]: I0218 12:11:41.992015 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-config" (OuterVolumeSpecName: "config") pod "2c422c4c-87bb-416a-9912-4d27d870af50" (UID: "2c422c4c-87bb-416a-9912-4d27d870af50"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:42 crc kubenswrapper[4717]: I0218 12:11:42.000016 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2c422c4c-87bb-416a-9912-4d27d870af50" (UID: "2c422c4c-87bb-416a-9912-4d27d870af50"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:11:42 crc kubenswrapper[4717]: I0218 12:11:42.008125 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vv2f\" (UniqueName: \"kubernetes.io/projected/2c422c4c-87bb-416a-9912-4d27d870af50-kube-api-access-8vv2f\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:42 crc kubenswrapper[4717]: I0218 12:11:42.008169 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:42 crc kubenswrapper[4717]: I0218 12:11:42.008183 4717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:42 crc kubenswrapper[4717]: I0218 12:11:42.008192 4717 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:42 crc kubenswrapper[4717]: I0218 12:11:42.008203 4717 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:42 crc kubenswrapper[4717]: I0218 12:11:42.008212 4717 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:42 crc kubenswrapper[4717]: I0218 12:11:42.008222 4717 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c422c4c-87bb-416a-9912-4d27d870af50-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 12:11:42 crc kubenswrapper[4717]: I0218 12:11:42.632575 
4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" event={"ID":"2c422c4c-87bb-416a-9912-4d27d870af50","Type":"ContainerDied","Data":"b271b0c0cd023d9db32175e7befdaeba0e725ffd3d998c66eedefca70f3520fe"} Feb 18 12:11:42 crc kubenswrapper[4717]: I0218 12:11:42.632663 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-brlgk" Feb 18 12:11:42 crc kubenswrapper[4717]: I0218 12:11:42.632681 4717 scope.go:117] "RemoveContainer" containerID="81d0c447c4cd1bb2bf20c64b3b80c91bcde44dceb8277a48d7cd400f80466aeb" Feb 18 12:11:42 crc kubenswrapper[4717]: I0218 12:11:42.663693 4717 scope.go:117] "RemoveContainer" containerID="90795cc6192194d54995a213d13b3c592a9542adadf79097a805b8c01a64e2ce" Feb 18 12:11:42 crc kubenswrapper[4717]: I0218 12:11:42.666441 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-brlgk"] Feb 18 12:11:42 crc kubenswrapper[4717]: I0218 12:11:42.676987 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-brlgk"] Feb 18 12:11:43 crc kubenswrapper[4717]: I0218 12:11:43.049703 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c422c4c-87bb-416a-9912-4d27d870af50" path="/var/lib/kubelet/pods/2c422c4c-87bb-416a-9912-4d27d870af50/volumes" Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.793425 4717 generic.go:334] "Generic (PLEG): container finished" podID="1c31df4b-bbb2-4bdf-9c36-db03b261067c" containerID="20d7b2db57ddc515e5a16fe89a70cfc90edb2080595524e0fe3649ed7d5f1838" exitCode=0 Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.793827 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1c31df4b-bbb2-4bdf-9c36-db03b261067c","Type":"ContainerDied","Data":"20d7b2db57ddc515e5a16fe89a70cfc90edb2080595524e0fe3649ed7d5f1838"} Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 
12:11:54.795815 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27"] Feb 18 12:11:54 crc kubenswrapper[4717]: E0218 12:11:54.796608 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e157b4-5706-45aa-b5bf-9c5bd109b501" containerName="init" Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.796705 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e157b4-5706-45aa-b5bf-9c5bd109b501" containerName="init" Feb 18 12:11:54 crc kubenswrapper[4717]: E0218 12:11:54.796784 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e157b4-5706-45aa-b5bf-9c5bd109b501" containerName="dnsmasq-dns" Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.796858 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e157b4-5706-45aa-b5bf-9c5bd109b501" containerName="dnsmasq-dns" Feb 18 12:11:54 crc kubenswrapper[4717]: E0218 12:11:54.796949 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c422c4c-87bb-416a-9912-4d27d870af50" containerName="init" Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.797020 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c422c4c-87bb-416a-9912-4d27d870af50" containerName="init" Feb 18 12:11:54 crc kubenswrapper[4717]: E0218 12:11:54.797107 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c422c4c-87bb-416a-9912-4d27d870af50" containerName="dnsmasq-dns" Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.797174 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c422c4c-87bb-416a-9912-4d27d870af50" containerName="dnsmasq-dns" Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.797578 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e157b4-5706-45aa-b5bf-9c5bd109b501" containerName="dnsmasq-dns" Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.797700 4717 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2c422c4c-87bb-416a-9912-4d27d870af50" containerName="dnsmasq-dns" Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.798937 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.800655 4717 generic.go:334] "Generic (PLEG): container finished" podID="57c90bed-ebc6-4053-b92b-1622edda048a" containerID="b4af9e7aefd2cb7fa11fd0256cb59aa4aa8bc664064ca667b46558aafe4b4acf" exitCode=0 Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.800723 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"57c90bed-ebc6-4053-b92b-1622edda048a","Type":"ContainerDied","Data":"b4af9e7aefd2cb7fa11fd0256cb59aa4aa8bc664064ca667b46558aafe4b4acf"} Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.801514 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.802986 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx" Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.802988 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.802987 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.821696 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27"] Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.987874 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27\" (UID: \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.988141 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27\" (UID: \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.988430 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27\" (UID: \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" Feb 18 12:11:54 crc kubenswrapper[4717]: I0218 12:11:54.988672 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5hcn\" (UniqueName: \"kubernetes.io/projected/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-kube-api-access-x5hcn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27\" (UID: \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" Feb 18 12:11:55 crc kubenswrapper[4717]: I0218 12:11:55.090630 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27\" (UID: \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" Feb 18 12:11:55 crc kubenswrapper[4717]: I0218 12:11:55.090746 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27\" (UID: \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" Feb 18 12:11:55 crc kubenswrapper[4717]: I0218 12:11:55.090834 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5hcn\" (UniqueName: \"kubernetes.io/projected/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-kube-api-access-x5hcn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27\" (UID: \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" Feb 18 12:11:55 crc kubenswrapper[4717]: I0218 12:11:55.090892 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27\" (UID: \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" Feb 18 12:11:55 crc kubenswrapper[4717]: I0218 12:11:55.096801 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27\" (UID: \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" Feb 18 12:11:55 crc 
kubenswrapper[4717]: I0218 12:11:55.096819 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27\" (UID: \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" Feb 18 12:11:55 crc kubenswrapper[4717]: I0218 12:11:55.098287 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27\" (UID: \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" Feb 18 12:11:55 crc kubenswrapper[4717]: I0218 12:11:55.108107 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5hcn\" (UniqueName: \"kubernetes.io/projected/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-kube-api-access-x5hcn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27\" (UID: \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" Feb 18 12:11:55 crc kubenswrapper[4717]: I0218 12:11:55.109171 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" Feb 18 12:11:55 crc kubenswrapper[4717]: I0218 12:11:55.747788 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27"] Feb 18 12:11:55 crc kubenswrapper[4717]: I0218 12:11:55.822576 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"57c90bed-ebc6-4053-b92b-1622edda048a","Type":"ContainerStarted","Data":"f0c09684cb0e5b44bfbd03e768d16d0620661d25197822073ce981f6ef5f6a8d"} Feb 18 12:11:55 crc kubenswrapper[4717]: I0218 12:11:55.824564 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 12:11:55 crc kubenswrapper[4717]: I0218 12:11:55.827996 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1c31df4b-bbb2-4bdf-9c36-db03b261067c","Type":"ContainerStarted","Data":"b04dba5df10bfe9ff539611f6a7bbf84891cab7ed5611c5fedecf00b8937b99b"} Feb 18 12:11:55 crc kubenswrapper[4717]: I0218 12:11:55.828239 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:11:55 crc kubenswrapper[4717]: I0218 12:11:55.830165 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" event={"ID":"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456","Type":"ContainerStarted","Data":"0fe48b18fc2c6f1d4916e86b6cb5aafa182aaaea92c551f749b2da6b545adeaa"} Feb 18 12:11:55 crc kubenswrapper[4717]: I0218 12:11:55.873463 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.873438525 podStartE2EDuration="37.873438525s" podCreationTimestamp="2026-02-18 12:11:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 12:11:55.848064007 +0000 UTC m=+1350.250165323" watchObservedRunningTime="2026-02-18 12:11:55.873438525 +0000 UTC m=+1350.275539841" Feb 18 12:11:55 crc kubenswrapper[4717]: I0218 12:11:55.899652 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.899613045 podStartE2EDuration="36.899613045s" podCreationTimestamp="2026-02-18 12:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:11:55.881218518 +0000 UTC m=+1350.283319834" watchObservedRunningTime="2026-02-18 12:11:55.899613045 +0000 UTC m=+1350.301714361" Feb 18 12:12:07 crc kubenswrapper[4717]: I0218 12:12:07.997574 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" event={"ID":"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456","Type":"ContainerStarted","Data":"13ef7eb226ebe1dd73394ab4ec4fc96a17799c5c8bea4dc82a373999a3c0f462"} Feb 18 12:12:08 crc kubenswrapper[4717]: I0218 12:12:08.026656 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" podStartSLOduration=2.939799548 podStartE2EDuration="14.026616756s" podCreationTimestamp="2026-02-18 12:11:54 +0000 UTC" firstStartedPulling="2026-02-18 12:11:55.751283694 +0000 UTC m=+1350.153385000" lastFinishedPulling="2026-02-18 12:12:06.838100892 +0000 UTC m=+1361.240202208" observedRunningTime="2026-02-18 12:12:08.024003561 +0000 UTC m=+1362.426104887" watchObservedRunningTime="2026-02-18 12:12:08.026616756 +0000 UTC m=+1362.428718072" Feb 18 12:12:09 crc kubenswrapper[4717]: I0218 12:12:09.184497 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 12:12:09 crc kubenswrapper[4717]: I0218 12:12:09.821608 4717 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 12:12:18 crc kubenswrapper[4717]: I0218 12:12:18.128425 4717 generic.go:334] "Generic (PLEG): container finished" podID="74c7ea9f-0f71-44a4-b3cb-8fd20e90f456" containerID="13ef7eb226ebe1dd73394ab4ec4fc96a17799c5c8bea4dc82a373999a3c0f462" exitCode=0 Feb 18 12:12:18 crc kubenswrapper[4717]: I0218 12:12:18.128518 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" event={"ID":"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456","Type":"ContainerDied","Data":"13ef7eb226ebe1dd73394ab4ec4fc96a17799c5c8bea4dc82a373999a3c0f462"} Feb 18 12:12:19 crc kubenswrapper[4717]: I0218 12:12:19.576429 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" Feb 18 12:12:19 crc kubenswrapper[4717]: I0218 12:12:19.707692 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5hcn\" (UniqueName: \"kubernetes.io/projected/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-kube-api-access-x5hcn\") pod \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\" (UID: \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\") " Feb 18 12:12:19 crc kubenswrapper[4717]: I0218 12:12:19.707784 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-inventory\") pod \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\" (UID: \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\") " Feb 18 12:12:19 crc kubenswrapper[4717]: I0218 12:12:19.707923 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-repo-setup-combined-ca-bundle\") pod \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\" (UID: \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\") " Feb 18 12:12:19 crc 
kubenswrapper[4717]: I0218 12:12:19.708037 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-ssh-key-openstack-edpm-ipam\") pod \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\" (UID: \"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456\") " Feb 18 12:12:19 crc kubenswrapper[4717]: I0218 12:12:19.717177 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-kube-api-access-x5hcn" (OuterVolumeSpecName: "kube-api-access-x5hcn") pod "74c7ea9f-0f71-44a4-b3cb-8fd20e90f456" (UID: "74c7ea9f-0f71-44a4-b3cb-8fd20e90f456"). InnerVolumeSpecName "kube-api-access-x5hcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:12:19 crc kubenswrapper[4717]: I0218 12:12:19.718446 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "74c7ea9f-0f71-44a4-b3cb-8fd20e90f456" (UID: "74c7ea9f-0f71-44a4-b3cb-8fd20e90f456"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:12:19 crc kubenswrapper[4717]: I0218 12:12:19.742517 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "74c7ea9f-0f71-44a4-b3cb-8fd20e90f456" (UID: "74c7ea9f-0f71-44a4-b3cb-8fd20e90f456"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:12:19 crc kubenswrapper[4717]: I0218 12:12:19.745011 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-inventory" (OuterVolumeSpecName: "inventory") pod "74c7ea9f-0f71-44a4-b3cb-8fd20e90f456" (UID: "74c7ea9f-0f71-44a4-b3cb-8fd20e90f456"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:12:19 crc kubenswrapper[4717]: I0218 12:12:19.811885 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5hcn\" (UniqueName: \"kubernetes.io/projected/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-kube-api-access-x5hcn\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:19 crc kubenswrapper[4717]: I0218 12:12:19.811953 4717 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:19 crc kubenswrapper[4717]: I0218 12:12:19.811968 4717 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:19 crc kubenswrapper[4717]: I0218 12:12:19.811980 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74c7ea9f-0f71-44a4-b3cb-8fd20e90f456-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.152152 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" event={"ID":"74c7ea9f-0f71-44a4-b3cb-8fd20e90f456","Type":"ContainerDied","Data":"0fe48b18fc2c6f1d4916e86b6cb5aafa182aaaea92c551f749b2da6b545adeaa"} Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.152210 
4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fe48b18fc2c6f1d4916e86b6cb5aafa182aaaea92c551f749b2da6b545adeaa" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.152325 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.283404 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9"] Feb 18 12:12:20 crc kubenswrapper[4717]: E0218 12:12:20.283981 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c7ea9f-0f71-44a4-b3cb-8fd20e90f456" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.284010 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c7ea9f-0f71-44a4-b3cb-8fd20e90f456" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.284338 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="74c7ea9f-0f71-44a4-b3cb-8fd20e90f456" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.285204 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.287488 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.287907 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.287928 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.288633 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.294545 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9"] Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.425127 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tjgb9\" (UID: \"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.425591 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b72kn\" (UniqueName: \"kubernetes.io/projected/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-kube-api-access-b72kn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tjgb9\" (UID: \"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.425725 4717 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tjgb9\" (UID: \"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.528110 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tjgb9\" (UID: \"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.528310 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tjgb9\" (UID: \"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.528417 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b72kn\" (UniqueName: \"kubernetes.io/projected/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-kube-api-access-b72kn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tjgb9\" (UID: \"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.534043 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tjgb9\" (UID: 
\"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.540962 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tjgb9\" (UID: \"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.547931 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b72kn\" (UniqueName: \"kubernetes.io/projected/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-kube-api-access-b72kn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tjgb9\" (UID: \"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" Feb 18 12:12:20 crc kubenswrapper[4717]: I0218 12:12:20.610090 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" Feb 18 12:12:21 crc kubenswrapper[4717]: I0218 12:12:21.150898 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9"] Feb 18 12:12:21 crc kubenswrapper[4717]: I0218 12:12:21.166194 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" event={"ID":"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219","Type":"ContainerStarted","Data":"9ebdc3a63eac83948b25b0d9a4438b4806534a9caa0f4237c7a5bae03189e751"} Feb 18 12:12:22 crc kubenswrapper[4717]: I0218 12:12:22.180379 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" event={"ID":"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219","Type":"ContainerStarted","Data":"f5c0dea73b0aa5c5ca3aaca936d6bdfb6c47e3300e6afd59deff811f503b7b28"} Feb 18 12:12:22 crc kubenswrapper[4717]: I0218 12:12:22.207051 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" podStartSLOduration=1.806260971 podStartE2EDuration="2.207013426s" podCreationTimestamp="2026-02-18 12:12:20 +0000 UTC" firstStartedPulling="2026-02-18 12:12:21.155039696 +0000 UTC m=+1375.557141002" lastFinishedPulling="2026-02-18 12:12:21.555792151 +0000 UTC m=+1375.957893457" observedRunningTime="2026-02-18 12:12:22.205542784 +0000 UTC m=+1376.607644100" watchObservedRunningTime="2026-02-18 12:12:22.207013426 +0000 UTC m=+1376.609114762" Feb 18 12:12:25 crc kubenswrapper[4717]: I0218 12:12:25.213698 4717 generic.go:334] "Generic (PLEG): container finished" podID="51e7bb49-4d7d-44a3-bb44-dcf18ac0f219" containerID="f5c0dea73b0aa5c5ca3aaca936d6bdfb6c47e3300e6afd59deff811f503b7b28" exitCode=0 Feb 18 12:12:25 crc kubenswrapper[4717]: I0218 12:12:25.213826 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" event={"ID":"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219","Type":"ContainerDied","Data":"f5c0dea73b0aa5c5ca3aaca936d6bdfb6c47e3300e6afd59deff811f503b7b28"} Feb 18 12:12:26 crc kubenswrapper[4717]: I0218 12:12:26.668562 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" Feb 18 12:12:26 crc kubenswrapper[4717]: I0218 12:12:26.768681 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-ssh-key-openstack-edpm-ipam\") pod \"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219\" (UID: \"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219\") " Feb 18 12:12:26 crc kubenswrapper[4717]: I0218 12:12:26.768903 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b72kn\" (UniqueName: \"kubernetes.io/projected/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-kube-api-access-b72kn\") pod \"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219\" (UID: \"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219\") " Feb 18 12:12:26 crc kubenswrapper[4717]: I0218 12:12:26.769069 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-inventory\") pod \"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219\" (UID: \"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219\") " Feb 18 12:12:26 crc kubenswrapper[4717]: I0218 12:12:26.776290 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-kube-api-access-b72kn" (OuterVolumeSpecName: "kube-api-access-b72kn") pod "51e7bb49-4d7d-44a3-bb44-dcf18ac0f219" (UID: "51e7bb49-4d7d-44a3-bb44-dcf18ac0f219"). InnerVolumeSpecName "kube-api-access-b72kn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:12:26 crc kubenswrapper[4717]: I0218 12:12:26.803524 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-inventory" (OuterVolumeSpecName: "inventory") pod "51e7bb49-4d7d-44a3-bb44-dcf18ac0f219" (UID: "51e7bb49-4d7d-44a3-bb44-dcf18ac0f219"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:12:26 crc kubenswrapper[4717]: I0218 12:12:26.807466 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "51e7bb49-4d7d-44a3-bb44-dcf18ac0f219" (UID: "51e7bb49-4d7d-44a3-bb44-dcf18ac0f219"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:12:26 crc kubenswrapper[4717]: I0218 12:12:26.875106 4717 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:26 crc kubenswrapper[4717]: I0218 12:12:26.875144 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:26 crc kubenswrapper[4717]: I0218 12:12:26.875158 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b72kn\" (UniqueName: \"kubernetes.io/projected/51e7bb49-4d7d-44a3-bb44-dcf18ac0f219-kube-api-access-b72kn\") on node \"crc\" DevicePath \"\"" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.242570 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" 
event={"ID":"51e7bb49-4d7d-44a3-bb44-dcf18ac0f219","Type":"ContainerDied","Data":"9ebdc3a63eac83948b25b0d9a4438b4806534a9caa0f4237c7a5bae03189e751"} Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.242641 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ebdc3a63eac83948b25b0d9a4438b4806534a9caa0f4237c7a5bae03189e751" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.242642 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tjgb9" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.325730 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg"] Feb 18 12:12:27 crc kubenswrapper[4717]: E0218 12:12:27.326590 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e7bb49-4d7d-44a3-bb44-dcf18ac0f219" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.326621 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e7bb49-4d7d-44a3-bb44-dcf18ac0f219" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.326891 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e7bb49-4d7d-44a3-bb44-dcf18ac0f219" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.327882 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.330514 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.330537 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.330893 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.331115 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.338127 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg"] Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.487408 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg\" (UID: \"ca940f40-1894-4b6a-bb57-acac20cd47f3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.487743 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p96tl\" (UniqueName: \"kubernetes.io/projected/ca940f40-1894-4b6a-bb57-acac20cd47f3-kube-api-access-p96tl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg\" (UID: \"ca940f40-1894-4b6a-bb57-acac20cd47f3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 
12:12:27.487918 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg\" (UID: \"ca940f40-1894-4b6a-bb57-acac20cd47f3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.488015 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg\" (UID: \"ca940f40-1894-4b6a-bb57-acac20cd47f3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.590833 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg\" (UID: \"ca940f40-1894-4b6a-bb57-acac20cd47f3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.590902 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p96tl\" (UniqueName: \"kubernetes.io/projected/ca940f40-1894-4b6a-bb57-acac20cd47f3-kube-api-access-p96tl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg\" (UID: \"ca940f40-1894-4b6a-bb57-acac20cd47f3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.590992 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg\" (UID: \"ca940f40-1894-4b6a-bb57-acac20cd47f3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.591024 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg\" (UID: \"ca940f40-1894-4b6a-bb57-acac20cd47f3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.595579 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg\" (UID: \"ca940f40-1894-4b6a-bb57-acac20cd47f3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.596825 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg\" (UID: \"ca940f40-1894-4b6a-bb57-acac20cd47f3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.597740 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg\" (UID: \"ca940f40-1894-4b6a-bb57-acac20cd47f3\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.617068 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p96tl\" (UniqueName: \"kubernetes.io/projected/ca940f40-1894-4b6a-bb57-acac20cd47f3-kube-api-access-p96tl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg\" (UID: \"ca940f40-1894-4b6a-bb57-acac20cd47f3\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" Feb 18 12:12:27 crc kubenswrapper[4717]: I0218 12:12:27.651175 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" Feb 18 12:12:28 crc kubenswrapper[4717]: I0218 12:12:28.241552 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg"] Feb 18 12:12:28 crc kubenswrapper[4717]: I0218 12:12:28.257016 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" event={"ID":"ca940f40-1894-4b6a-bb57-acac20cd47f3","Type":"ContainerStarted","Data":"970c58bddfb135966b36946afe45ca991c13826662607632e2dd2677f30cd98f"} Feb 18 12:12:29 crc kubenswrapper[4717]: I0218 12:12:29.271182 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" event={"ID":"ca940f40-1894-4b6a-bb57-acac20cd47f3","Type":"ContainerStarted","Data":"82f84be545ddeccf6b6bb58ba8b7314e751a1b9ccf7ca360373f8813c19b0c08"} Feb 18 12:12:29 crc kubenswrapper[4717]: I0218 12:12:29.294166 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" podStartSLOduration=1.903246905 podStartE2EDuration="2.294138748s" podCreationTimestamp="2026-02-18 12:12:27 +0000 UTC" firstStartedPulling="2026-02-18 12:12:28.242044524 +0000 UTC m=+1382.644145840" 
lastFinishedPulling="2026-02-18 12:12:28.632936367 +0000 UTC m=+1383.035037683" observedRunningTime="2026-02-18 12:12:29.285638654 +0000 UTC m=+1383.687739970" watchObservedRunningTime="2026-02-18 12:12:29.294138748 +0000 UTC m=+1383.696240064" Feb 18 12:12:38 crc kubenswrapper[4717]: I0218 12:12:38.131741 4717 scope.go:117] "RemoveContainer" containerID="ace16603515589089c74f1a1a02687c0ba35342b5596e46465e01d7deda1693e" Feb 18 12:13:12 crc kubenswrapper[4717]: I0218 12:13:12.773749 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:13:12 crc kubenswrapper[4717]: I0218 12:13:12.774662 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:13:38 crc kubenswrapper[4717]: I0218 12:13:38.216155 4717 scope.go:117] "RemoveContainer" containerID="bf9905cd8cf1d1b69a90269374cd6e6f0fa0a3fb556a86ffe9312a10ef3fcb65" Feb 18 12:13:42 crc kubenswrapper[4717]: I0218 12:13:42.773235 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:13:42 crc kubenswrapper[4717]: I0218 12:13:42.774206 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:14:00 crc kubenswrapper[4717]: I0218 12:14:00.458382 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wptmd"] Feb 18 12:14:00 crc kubenswrapper[4717]: I0218 12:14:00.461493 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:00 crc kubenswrapper[4717]: I0218 12:14:00.491311 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wptmd"] Feb 18 12:14:00 crc kubenswrapper[4717]: I0218 12:14:00.546582 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmtgn\" (UniqueName: \"kubernetes.io/projected/45628e23-01d7-4be8-81f8-dc42402be647-kube-api-access-pmtgn\") pod \"community-operators-wptmd\" (UID: \"45628e23-01d7-4be8-81f8-dc42402be647\") " pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:00 crc kubenswrapper[4717]: I0218 12:14:00.547824 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45628e23-01d7-4be8-81f8-dc42402be647-catalog-content\") pod \"community-operators-wptmd\" (UID: \"45628e23-01d7-4be8-81f8-dc42402be647\") " pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:00 crc kubenswrapper[4717]: I0218 12:14:00.547950 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45628e23-01d7-4be8-81f8-dc42402be647-utilities\") pod \"community-operators-wptmd\" (UID: \"45628e23-01d7-4be8-81f8-dc42402be647\") " pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:00 crc kubenswrapper[4717]: I0218 12:14:00.650581 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pmtgn\" (UniqueName: \"kubernetes.io/projected/45628e23-01d7-4be8-81f8-dc42402be647-kube-api-access-pmtgn\") pod \"community-operators-wptmd\" (UID: \"45628e23-01d7-4be8-81f8-dc42402be647\") " pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:00 crc kubenswrapper[4717]: I0218 12:14:00.650652 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45628e23-01d7-4be8-81f8-dc42402be647-catalog-content\") pod \"community-operators-wptmd\" (UID: \"45628e23-01d7-4be8-81f8-dc42402be647\") " pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:00 crc kubenswrapper[4717]: I0218 12:14:00.650687 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45628e23-01d7-4be8-81f8-dc42402be647-utilities\") pod \"community-operators-wptmd\" (UID: \"45628e23-01d7-4be8-81f8-dc42402be647\") " pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:00 crc kubenswrapper[4717]: I0218 12:14:00.651848 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45628e23-01d7-4be8-81f8-dc42402be647-utilities\") pod \"community-operators-wptmd\" (UID: \"45628e23-01d7-4be8-81f8-dc42402be647\") " pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:00 crc kubenswrapper[4717]: I0218 12:14:00.652115 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45628e23-01d7-4be8-81f8-dc42402be647-catalog-content\") pod \"community-operators-wptmd\" (UID: \"45628e23-01d7-4be8-81f8-dc42402be647\") " pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:00 crc kubenswrapper[4717]: I0218 12:14:00.677152 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pmtgn\" (UniqueName: \"kubernetes.io/projected/45628e23-01d7-4be8-81f8-dc42402be647-kube-api-access-pmtgn\") pod \"community-operators-wptmd\" (UID: \"45628e23-01d7-4be8-81f8-dc42402be647\") " pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:00 crc kubenswrapper[4717]: I0218 12:14:00.783888 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:01 crc kubenswrapper[4717]: I0218 12:14:01.324791 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wptmd"] Feb 18 12:14:02 crc kubenswrapper[4717]: I0218 12:14:02.293425 4717 generic.go:334] "Generic (PLEG): container finished" podID="45628e23-01d7-4be8-81f8-dc42402be647" containerID="f847ea1bd2e9eee43eb2c90c1bc661cc507d452db5f53709600387254883fa03" exitCode=0 Feb 18 12:14:02 crc kubenswrapper[4717]: I0218 12:14:02.293509 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wptmd" event={"ID":"45628e23-01d7-4be8-81f8-dc42402be647","Type":"ContainerDied","Data":"f847ea1bd2e9eee43eb2c90c1bc661cc507d452db5f53709600387254883fa03"} Feb 18 12:14:02 crc kubenswrapper[4717]: I0218 12:14:02.293862 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wptmd" event={"ID":"45628e23-01d7-4be8-81f8-dc42402be647","Type":"ContainerStarted","Data":"27463955d8ff12573dc287f91d1d01b3d9b592fa46f91d8e334601159230b21a"} Feb 18 12:14:03 crc kubenswrapper[4717]: I0218 12:14:03.310314 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wptmd" event={"ID":"45628e23-01d7-4be8-81f8-dc42402be647","Type":"ContainerStarted","Data":"b8b86ebc1deef2f72e811f6db737675749d7844f9575ccd061aab6557631552b"} Feb 18 12:14:04 crc kubenswrapper[4717]: I0218 12:14:04.323184 4717 generic.go:334] "Generic (PLEG): container finished" 
podID="45628e23-01d7-4be8-81f8-dc42402be647" containerID="b8b86ebc1deef2f72e811f6db737675749d7844f9575ccd061aab6557631552b" exitCode=0 Feb 18 12:14:04 crc kubenswrapper[4717]: I0218 12:14:04.323328 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wptmd" event={"ID":"45628e23-01d7-4be8-81f8-dc42402be647","Type":"ContainerDied","Data":"b8b86ebc1deef2f72e811f6db737675749d7844f9575ccd061aab6557631552b"} Feb 18 12:14:05 crc kubenswrapper[4717]: I0218 12:14:05.340535 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wptmd" event={"ID":"45628e23-01d7-4be8-81f8-dc42402be647","Type":"ContainerStarted","Data":"6e89ce4eddfeff01b23b9fc0229e55dc15f6ad6a5b93ead7bddd18f27464c12b"} Feb 18 12:14:05 crc kubenswrapper[4717]: I0218 12:14:05.368755 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wptmd" podStartSLOduration=2.712897362 podStartE2EDuration="5.36872672s" podCreationTimestamp="2026-02-18 12:14:00 +0000 UTC" firstStartedPulling="2026-02-18 12:14:02.29552403 +0000 UTC m=+1476.697625346" lastFinishedPulling="2026-02-18 12:14:04.951353388 +0000 UTC m=+1479.353454704" observedRunningTime="2026-02-18 12:14:05.358314191 +0000 UTC m=+1479.760415527" watchObservedRunningTime="2026-02-18 12:14:05.36872672 +0000 UTC m=+1479.770828036" Feb 18 12:14:10 crc kubenswrapper[4717]: I0218 12:14:10.784683 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:10 crc kubenswrapper[4717]: I0218 12:14:10.785848 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:10 crc kubenswrapper[4717]: I0218 12:14:10.840285 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:11 
crc kubenswrapper[4717]: I0218 12:14:11.453512 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:11 crc kubenswrapper[4717]: I0218 12:14:11.514568 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wptmd"] Feb 18 12:14:12 crc kubenswrapper[4717]: I0218 12:14:12.773688 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:14:12 crc kubenswrapper[4717]: I0218 12:14:12.773765 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:14:12 crc kubenswrapper[4717]: I0218 12:14:12.773825 4717 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 12:14:12 crc kubenswrapper[4717]: I0218 12:14:12.774737 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8029dd99738cdaa1b1753862d62d575af02645633ab25bd8d0586f5ebdffe632"} pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:14:12 crc kubenswrapper[4717]: I0218 12:14:12.775289 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" 
containerName="machine-config-daemon" containerID="cri-o://8029dd99738cdaa1b1753862d62d575af02645633ab25bd8d0586f5ebdffe632" gracePeriod=600 Feb 18 12:14:13 crc kubenswrapper[4717]: I0218 12:14:13.432290 4717 generic.go:334] "Generic (PLEG): container finished" podID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerID="8029dd99738cdaa1b1753862d62d575af02645633ab25bd8d0586f5ebdffe632" exitCode=0 Feb 18 12:14:13 crc kubenswrapper[4717]: I0218 12:14:13.432378 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerDied","Data":"8029dd99738cdaa1b1753862d62d575af02645633ab25bd8d0586f5ebdffe632"} Feb 18 12:14:13 crc kubenswrapper[4717]: I0218 12:14:13.432802 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c"} Feb 18 12:14:13 crc kubenswrapper[4717]: I0218 12:14:13.432825 4717 scope.go:117] "RemoveContainer" containerID="8619a78ac97f793404bae81465977701fb9cf7482a56c2df47cd47e2df2d8754" Feb 18 12:14:13 crc kubenswrapper[4717]: I0218 12:14:13.433059 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wptmd" podUID="45628e23-01d7-4be8-81f8-dc42402be647" containerName="registry-server" containerID="cri-o://6e89ce4eddfeff01b23b9fc0229e55dc15f6ad6a5b93ead7bddd18f27464c12b" gracePeriod=2 Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.206862 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.313064 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45628e23-01d7-4be8-81f8-dc42402be647-catalog-content\") pod \"45628e23-01d7-4be8-81f8-dc42402be647\" (UID: \"45628e23-01d7-4be8-81f8-dc42402be647\") " Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.313743 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmtgn\" (UniqueName: \"kubernetes.io/projected/45628e23-01d7-4be8-81f8-dc42402be647-kube-api-access-pmtgn\") pod \"45628e23-01d7-4be8-81f8-dc42402be647\" (UID: \"45628e23-01d7-4be8-81f8-dc42402be647\") " Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.313987 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45628e23-01d7-4be8-81f8-dc42402be647-utilities\") pod \"45628e23-01d7-4be8-81f8-dc42402be647\" (UID: \"45628e23-01d7-4be8-81f8-dc42402be647\") " Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.315431 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45628e23-01d7-4be8-81f8-dc42402be647-utilities" (OuterVolumeSpecName: "utilities") pod "45628e23-01d7-4be8-81f8-dc42402be647" (UID: "45628e23-01d7-4be8-81f8-dc42402be647"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.320445 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45628e23-01d7-4be8-81f8-dc42402be647-kube-api-access-pmtgn" (OuterVolumeSpecName: "kube-api-access-pmtgn") pod "45628e23-01d7-4be8-81f8-dc42402be647" (UID: "45628e23-01d7-4be8-81f8-dc42402be647"). InnerVolumeSpecName "kube-api-access-pmtgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.369906 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45628e23-01d7-4be8-81f8-dc42402be647-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45628e23-01d7-4be8-81f8-dc42402be647" (UID: "45628e23-01d7-4be8-81f8-dc42402be647"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.417015 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45628e23-01d7-4be8-81f8-dc42402be647-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.417049 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45628e23-01d7-4be8-81f8-dc42402be647-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.417068 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmtgn\" (UniqueName: \"kubernetes.io/projected/45628e23-01d7-4be8-81f8-dc42402be647-kube-api-access-pmtgn\") on node \"crc\" DevicePath \"\"" Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.459711 4717 generic.go:334] "Generic (PLEG): container finished" podID="45628e23-01d7-4be8-81f8-dc42402be647" containerID="6e89ce4eddfeff01b23b9fc0229e55dc15f6ad6a5b93ead7bddd18f27464c12b" exitCode=0 Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.459791 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wptmd" Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.459787 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wptmd" event={"ID":"45628e23-01d7-4be8-81f8-dc42402be647","Type":"ContainerDied","Data":"6e89ce4eddfeff01b23b9fc0229e55dc15f6ad6a5b93ead7bddd18f27464c12b"} Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.459918 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wptmd" event={"ID":"45628e23-01d7-4be8-81f8-dc42402be647","Type":"ContainerDied","Data":"27463955d8ff12573dc287f91d1d01b3d9b592fa46f91d8e334601159230b21a"} Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.459945 4717 scope.go:117] "RemoveContainer" containerID="6e89ce4eddfeff01b23b9fc0229e55dc15f6ad6a5b93ead7bddd18f27464c12b" Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.490631 4717 scope.go:117] "RemoveContainer" containerID="b8b86ebc1deef2f72e811f6db737675749d7844f9575ccd061aab6557631552b" Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.506598 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wptmd"] Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.517945 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wptmd"] Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.518407 4717 scope.go:117] "RemoveContainer" containerID="f847ea1bd2e9eee43eb2c90c1bc661cc507d452db5f53709600387254883fa03" Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.565708 4717 scope.go:117] "RemoveContainer" containerID="6e89ce4eddfeff01b23b9fc0229e55dc15f6ad6a5b93ead7bddd18f27464c12b" Feb 18 12:14:15 crc kubenswrapper[4717]: E0218 12:14:15.566314 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6e89ce4eddfeff01b23b9fc0229e55dc15f6ad6a5b93ead7bddd18f27464c12b\": container with ID starting with 6e89ce4eddfeff01b23b9fc0229e55dc15f6ad6a5b93ead7bddd18f27464c12b not found: ID does not exist" containerID="6e89ce4eddfeff01b23b9fc0229e55dc15f6ad6a5b93ead7bddd18f27464c12b" Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.566384 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e89ce4eddfeff01b23b9fc0229e55dc15f6ad6a5b93ead7bddd18f27464c12b"} err="failed to get container status \"6e89ce4eddfeff01b23b9fc0229e55dc15f6ad6a5b93ead7bddd18f27464c12b\": rpc error: code = NotFound desc = could not find container \"6e89ce4eddfeff01b23b9fc0229e55dc15f6ad6a5b93ead7bddd18f27464c12b\": container with ID starting with 6e89ce4eddfeff01b23b9fc0229e55dc15f6ad6a5b93ead7bddd18f27464c12b not found: ID does not exist" Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.566425 4717 scope.go:117] "RemoveContainer" containerID="b8b86ebc1deef2f72e811f6db737675749d7844f9575ccd061aab6557631552b" Feb 18 12:14:15 crc kubenswrapper[4717]: E0218 12:14:15.566891 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8b86ebc1deef2f72e811f6db737675749d7844f9575ccd061aab6557631552b\": container with ID starting with b8b86ebc1deef2f72e811f6db737675749d7844f9575ccd061aab6557631552b not found: ID does not exist" containerID="b8b86ebc1deef2f72e811f6db737675749d7844f9575ccd061aab6557631552b" Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.566963 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8b86ebc1deef2f72e811f6db737675749d7844f9575ccd061aab6557631552b"} err="failed to get container status \"b8b86ebc1deef2f72e811f6db737675749d7844f9575ccd061aab6557631552b\": rpc error: code = NotFound desc = could not find container \"b8b86ebc1deef2f72e811f6db737675749d7844f9575ccd061aab6557631552b\": container with ID 
starting with b8b86ebc1deef2f72e811f6db737675749d7844f9575ccd061aab6557631552b not found: ID does not exist" Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.567018 4717 scope.go:117] "RemoveContainer" containerID="f847ea1bd2e9eee43eb2c90c1bc661cc507d452db5f53709600387254883fa03" Feb 18 12:14:15 crc kubenswrapper[4717]: E0218 12:14:15.568134 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f847ea1bd2e9eee43eb2c90c1bc661cc507d452db5f53709600387254883fa03\": container with ID starting with f847ea1bd2e9eee43eb2c90c1bc661cc507d452db5f53709600387254883fa03 not found: ID does not exist" containerID="f847ea1bd2e9eee43eb2c90c1bc661cc507d452db5f53709600387254883fa03" Feb 18 12:14:15 crc kubenswrapper[4717]: I0218 12:14:15.568210 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f847ea1bd2e9eee43eb2c90c1bc661cc507d452db5f53709600387254883fa03"} err="failed to get container status \"f847ea1bd2e9eee43eb2c90c1bc661cc507d452db5f53709600387254883fa03\": rpc error: code = NotFound desc = could not find container \"f847ea1bd2e9eee43eb2c90c1bc661cc507d452db5f53709600387254883fa03\": container with ID starting with f847ea1bd2e9eee43eb2c90c1bc661cc507d452db5f53709600387254883fa03 not found: ID does not exist" Feb 18 12:14:17 crc kubenswrapper[4717]: I0218 12:14:17.047943 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45628e23-01d7-4be8-81f8-dc42402be647" path="/var/lib/kubelet/pods/45628e23-01d7-4be8-81f8-dc42402be647/volumes" Feb 18 12:14:29 crc kubenswrapper[4717]: I0218 12:14:29.611605 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pbgdh"] Feb 18 12:14:29 crc kubenswrapper[4717]: E0218 12:14:29.612832 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45628e23-01d7-4be8-81f8-dc42402be647" containerName="registry-server" Feb 18 12:14:29 crc 
kubenswrapper[4717]: I0218 12:14:29.612850 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="45628e23-01d7-4be8-81f8-dc42402be647" containerName="registry-server" Feb 18 12:14:29 crc kubenswrapper[4717]: E0218 12:14:29.612868 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45628e23-01d7-4be8-81f8-dc42402be647" containerName="extract-content" Feb 18 12:14:29 crc kubenswrapper[4717]: I0218 12:14:29.612874 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="45628e23-01d7-4be8-81f8-dc42402be647" containerName="extract-content" Feb 18 12:14:29 crc kubenswrapper[4717]: E0218 12:14:29.612888 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45628e23-01d7-4be8-81f8-dc42402be647" containerName="extract-utilities" Feb 18 12:14:29 crc kubenswrapper[4717]: I0218 12:14:29.612894 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="45628e23-01d7-4be8-81f8-dc42402be647" containerName="extract-utilities" Feb 18 12:14:29 crc kubenswrapper[4717]: I0218 12:14:29.613106 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="45628e23-01d7-4be8-81f8-dc42402be647" containerName="registry-server" Feb 18 12:14:29 crc kubenswrapper[4717]: I0218 12:14:29.614629 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:29 crc kubenswrapper[4717]: I0218 12:14:29.635547 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pbgdh"] Feb 18 12:14:29 crc kubenswrapper[4717]: I0218 12:14:29.651430 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-utilities\") pod \"certified-operators-pbgdh\" (UID: \"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301\") " pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:29 crc kubenswrapper[4717]: I0218 12:14:29.651843 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-catalog-content\") pod \"certified-operators-pbgdh\" (UID: \"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301\") " pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:29 crc kubenswrapper[4717]: I0218 12:14:29.651911 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k549\" (UniqueName: \"kubernetes.io/projected/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-kube-api-access-9k549\") pod \"certified-operators-pbgdh\" (UID: \"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301\") " pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:29 crc kubenswrapper[4717]: I0218 12:14:29.754988 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-catalog-content\") pod \"certified-operators-pbgdh\" (UID: \"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301\") " pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:29 crc kubenswrapper[4717]: I0218 12:14:29.755127 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9k549\" (UniqueName: \"kubernetes.io/projected/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-kube-api-access-9k549\") pod \"certified-operators-pbgdh\" (UID: \"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301\") " pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:29 crc kubenswrapper[4717]: I0218 12:14:29.755374 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-utilities\") pod \"certified-operators-pbgdh\" (UID: \"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301\") " pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:29 crc kubenswrapper[4717]: I0218 12:14:29.755767 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-catalog-content\") pod \"certified-operators-pbgdh\" (UID: \"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301\") " pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:29 crc kubenswrapper[4717]: I0218 12:14:29.756082 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-utilities\") pod \"certified-operators-pbgdh\" (UID: \"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301\") " pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:29 crc kubenswrapper[4717]: I0218 12:14:29.780163 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k549\" (UniqueName: \"kubernetes.io/projected/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-kube-api-access-9k549\") pod \"certified-operators-pbgdh\" (UID: \"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301\") " pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:29 crc kubenswrapper[4717]: I0218 12:14:29.958318 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:30 crc kubenswrapper[4717]: I0218 12:14:30.482813 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pbgdh"] Feb 18 12:14:30 crc kubenswrapper[4717]: I0218 12:14:30.669572 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbgdh" event={"ID":"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301","Type":"ContainerStarted","Data":"c0b347cb9b193c0746a57a1202b539dbc134ff754298701d607709b2ad199dc3"} Feb 18 12:14:31 crc kubenswrapper[4717]: I0218 12:14:31.684472 4717 generic.go:334] "Generic (PLEG): container finished" podID="a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301" containerID="d3c7e17e26b883c5605ef249a5128068d293a459d2d5defa537500f4f1c74c8c" exitCode=0 Feb 18 12:14:31 crc kubenswrapper[4717]: I0218 12:14:31.684550 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbgdh" event={"ID":"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301","Type":"ContainerDied","Data":"d3c7e17e26b883c5605ef249a5128068d293a459d2d5defa537500f4f1c74c8c"} Feb 18 12:14:33 crc kubenswrapper[4717]: I0218 12:14:33.708313 4717 generic.go:334] "Generic (PLEG): container finished" podID="a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301" containerID="896d8d2795fbbf607e899d576b9a4d29d4817d07ae3a7b23e1b64f2b7752e80a" exitCode=0 Feb 18 12:14:33 crc kubenswrapper[4717]: I0218 12:14:33.708627 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbgdh" event={"ID":"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301","Type":"ContainerDied","Data":"896d8d2795fbbf607e899d576b9a4d29d4817d07ae3a7b23e1b64f2b7752e80a"} Feb 18 12:14:34 crc kubenswrapper[4717]: I0218 12:14:34.720754 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbgdh" 
event={"ID":"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301","Type":"ContainerStarted","Data":"946d65452fd0951c3ef8fb2b57851b06810af5abaa48ef093420fd8ee155ee7c"} Feb 18 12:14:34 crc kubenswrapper[4717]: I0218 12:14:34.746902 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pbgdh" podStartSLOduration=3.043985933 podStartE2EDuration="5.74688014s" podCreationTimestamp="2026-02-18 12:14:29 +0000 UTC" firstStartedPulling="2026-02-18 12:14:31.687577908 +0000 UTC m=+1506.089679234" lastFinishedPulling="2026-02-18 12:14:34.390472125 +0000 UTC m=+1508.792573441" observedRunningTime="2026-02-18 12:14:34.745918472 +0000 UTC m=+1509.148019808" watchObservedRunningTime="2026-02-18 12:14:34.74688014 +0000 UTC m=+1509.148981456" Feb 18 12:14:36 crc kubenswrapper[4717]: I0218 12:14:36.996075 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8jv5r"] Feb 18 12:14:37 crc kubenswrapper[4717]: I0218 12:14:37.004274 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:37 crc kubenswrapper[4717]: I0218 12:14:37.013559 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jv5r"] Feb 18 12:14:37 crc kubenswrapper[4717]: I0218 12:14:37.121557 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xwfv\" (UniqueName: \"kubernetes.io/projected/909c6c49-fb82-4dba-8cae-4c6a256e5c76-kube-api-access-8xwfv\") pod \"redhat-marketplace-8jv5r\" (UID: \"909c6c49-fb82-4dba-8cae-4c6a256e5c76\") " pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:37 crc kubenswrapper[4717]: I0218 12:14:37.121624 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/909c6c49-fb82-4dba-8cae-4c6a256e5c76-catalog-content\") pod \"redhat-marketplace-8jv5r\" (UID: \"909c6c49-fb82-4dba-8cae-4c6a256e5c76\") " pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:37 crc kubenswrapper[4717]: I0218 12:14:37.122091 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/909c6c49-fb82-4dba-8cae-4c6a256e5c76-utilities\") pod \"redhat-marketplace-8jv5r\" (UID: \"909c6c49-fb82-4dba-8cae-4c6a256e5c76\") " pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:37 crc kubenswrapper[4717]: I0218 12:14:37.224728 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/909c6c49-fb82-4dba-8cae-4c6a256e5c76-utilities\") pod \"redhat-marketplace-8jv5r\" (UID: \"909c6c49-fb82-4dba-8cae-4c6a256e5c76\") " pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:37 crc kubenswrapper[4717]: I0218 12:14:37.224913 4717 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-8xwfv\" (UniqueName: \"kubernetes.io/projected/909c6c49-fb82-4dba-8cae-4c6a256e5c76-kube-api-access-8xwfv\") pod \"redhat-marketplace-8jv5r\" (UID: \"909c6c49-fb82-4dba-8cae-4c6a256e5c76\") " pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:37 crc kubenswrapper[4717]: I0218 12:14:37.224960 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/909c6c49-fb82-4dba-8cae-4c6a256e5c76-catalog-content\") pod \"redhat-marketplace-8jv5r\" (UID: \"909c6c49-fb82-4dba-8cae-4c6a256e5c76\") " pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:37 crc kubenswrapper[4717]: I0218 12:14:37.225332 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/909c6c49-fb82-4dba-8cae-4c6a256e5c76-utilities\") pod \"redhat-marketplace-8jv5r\" (UID: \"909c6c49-fb82-4dba-8cae-4c6a256e5c76\") " pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:37 crc kubenswrapper[4717]: I0218 12:14:37.225686 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/909c6c49-fb82-4dba-8cae-4c6a256e5c76-catalog-content\") pod \"redhat-marketplace-8jv5r\" (UID: \"909c6c49-fb82-4dba-8cae-4c6a256e5c76\") " pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:37 crc kubenswrapper[4717]: I0218 12:14:37.252764 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xwfv\" (UniqueName: \"kubernetes.io/projected/909c6c49-fb82-4dba-8cae-4c6a256e5c76-kube-api-access-8xwfv\") pod \"redhat-marketplace-8jv5r\" (UID: \"909c6c49-fb82-4dba-8cae-4c6a256e5c76\") " pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:37 crc kubenswrapper[4717]: I0218 12:14:37.350358 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:37 crc kubenswrapper[4717]: I0218 12:14:37.937989 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jv5r"] Feb 18 12:14:38 crc kubenswrapper[4717]: I0218 12:14:38.294062 4717 scope.go:117] "RemoveContainer" containerID="820d6da3c878b9ba56840bb1eec2fc47ed8e071e39358826981f678e56200250" Feb 18 12:14:38 crc kubenswrapper[4717]: I0218 12:14:38.322058 4717 scope.go:117] "RemoveContainer" containerID="8f2107431594a373c859cc33bc39621351c20b6ce045ef1c9f09d77a9e63fa8b" Feb 18 12:14:38 crc kubenswrapper[4717]: I0218 12:14:38.348645 4717 scope.go:117] "RemoveContainer" containerID="5eff0b07a7121912c23ed3875ae3f77cec26fc44811d7c528e40b0c1997af65d" Feb 18 12:14:38 crc kubenswrapper[4717]: I0218 12:14:38.373457 4717 scope.go:117] "RemoveContainer" containerID="cdd3fc40499b9d917654397de25d20ab1cd4ae675735a4b0b9468346d147399e" Feb 18 12:14:38 crc kubenswrapper[4717]: I0218 12:14:38.779072 4717 generic.go:334] "Generic (PLEG): container finished" podID="909c6c49-fb82-4dba-8cae-4c6a256e5c76" containerID="4ac6998ee086d9e7c3091f2e457c0e83e75d5b376c23a48d433ab510bce8794f" exitCode=0 Feb 18 12:14:38 crc kubenswrapper[4717]: I0218 12:14:38.779132 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jv5r" event={"ID":"909c6c49-fb82-4dba-8cae-4c6a256e5c76","Type":"ContainerDied","Data":"4ac6998ee086d9e7c3091f2e457c0e83e75d5b376c23a48d433ab510bce8794f"} Feb 18 12:14:38 crc kubenswrapper[4717]: I0218 12:14:38.779165 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jv5r" event={"ID":"909c6c49-fb82-4dba-8cae-4c6a256e5c76","Type":"ContainerStarted","Data":"1b6ae0ea4c9aa60de0990bf51734e75aec99fd31cf757c91efe4e09198d10998"} Feb 18 12:14:39 crc kubenswrapper[4717]: I0218 12:14:39.959506 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:39 crc kubenswrapper[4717]: I0218 12:14:39.960197 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:40 crc kubenswrapper[4717]: I0218 12:14:40.012204 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:40 crc kubenswrapper[4717]: I0218 12:14:40.808176 4717 generic.go:334] "Generic (PLEG): container finished" podID="909c6c49-fb82-4dba-8cae-4c6a256e5c76" containerID="2ab4599a1108ab45f7548f0e6d0c0fd913a936832512e6bfd62b47c5321ab606" exitCode=0 Feb 18 12:14:40 crc kubenswrapper[4717]: I0218 12:14:40.808236 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jv5r" event={"ID":"909c6c49-fb82-4dba-8cae-4c6a256e5c76","Type":"ContainerDied","Data":"2ab4599a1108ab45f7548f0e6d0c0fd913a936832512e6bfd62b47c5321ab606"} Feb 18 12:14:40 crc kubenswrapper[4717]: I0218 12:14:40.871359 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:41 crc kubenswrapper[4717]: I0218 12:14:41.380697 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pbgdh"] Feb 18 12:14:42 crc kubenswrapper[4717]: I0218 12:14:42.846912 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jv5r" event={"ID":"909c6c49-fb82-4dba-8cae-4c6a256e5c76","Type":"ContainerStarted","Data":"be85a2716bc811ade9b0d0e90dc3ec968699b83872e84fcfb3d3a79cc26fcaf6"} Feb 18 12:14:42 crc kubenswrapper[4717]: I0218 12:14:42.847136 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pbgdh" podUID="a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301" containerName="registry-server" 
containerID="cri-o://946d65452fd0951c3ef8fb2b57851b06810af5abaa48ef093420fd8ee155ee7c" gracePeriod=2 Feb 18 12:14:42 crc kubenswrapper[4717]: I0218 12:14:42.874132 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8jv5r" podStartSLOduration=3.911440359 podStartE2EDuration="6.874052529s" podCreationTimestamp="2026-02-18 12:14:36 +0000 UTC" firstStartedPulling="2026-02-18 12:14:38.782171234 +0000 UTC m=+1513.184272550" lastFinishedPulling="2026-02-18 12:14:41.744783404 +0000 UTC m=+1516.146884720" observedRunningTime="2026-02-18 12:14:42.871230088 +0000 UTC m=+1517.273331404" watchObservedRunningTime="2026-02-18 12:14:42.874052529 +0000 UTC m=+1517.276153845" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.422380 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.597306 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-utilities\") pod \"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301\" (UID: \"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301\") " Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.597784 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k549\" (UniqueName: \"kubernetes.io/projected/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-kube-api-access-9k549\") pod \"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301\" (UID: \"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301\") " Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.597909 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-catalog-content\") pod \"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301\" (UID: \"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301\") 
" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.598060 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-utilities" (OuterVolumeSpecName: "utilities") pod "a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301" (UID: "a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.598586 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.606234 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-kube-api-access-9k549" (OuterVolumeSpecName: "kube-api-access-9k549") pod "a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301" (UID: "a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301"). InnerVolumeSpecName "kube-api-access-9k549". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.650969 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301" (UID: "a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.701106 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.701152 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k549\" (UniqueName: \"kubernetes.io/projected/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301-kube-api-access-9k549\") on node \"crc\" DevicePath \"\"" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.860062 4717 generic.go:334] "Generic (PLEG): container finished" podID="a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301" containerID="946d65452fd0951c3ef8fb2b57851b06810af5abaa48ef093420fd8ee155ee7c" exitCode=0 Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.860269 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pbgdh" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.860228 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbgdh" event={"ID":"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301","Type":"ContainerDied","Data":"946d65452fd0951c3ef8fb2b57851b06810af5abaa48ef093420fd8ee155ee7c"} Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.860364 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbgdh" event={"ID":"a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301","Type":"ContainerDied","Data":"c0b347cb9b193c0746a57a1202b539dbc134ff754298701d607709b2ad199dc3"} Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.860391 4717 scope.go:117] "RemoveContainer" containerID="946d65452fd0951c3ef8fb2b57851b06810af5abaa48ef093420fd8ee155ee7c" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.889689 4717 scope.go:117] "RemoveContainer" 
containerID="896d8d2795fbbf607e899d576b9a4d29d4817d07ae3a7b23e1b64f2b7752e80a" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.902278 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pbgdh"] Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.914427 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pbgdh"] Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.940634 4717 scope.go:117] "RemoveContainer" containerID="d3c7e17e26b883c5605ef249a5128068d293a459d2d5defa537500f4f1c74c8c" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.973331 4717 scope.go:117] "RemoveContainer" containerID="946d65452fd0951c3ef8fb2b57851b06810af5abaa48ef093420fd8ee155ee7c" Feb 18 12:14:43 crc kubenswrapper[4717]: E0218 12:14:43.974031 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"946d65452fd0951c3ef8fb2b57851b06810af5abaa48ef093420fd8ee155ee7c\": container with ID starting with 946d65452fd0951c3ef8fb2b57851b06810af5abaa48ef093420fd8ee155ee7c not found: ID does not exist" containerID="946d65452fd0951c3ef8fb2b57851b06810af5abaa48ef093420fd8ee155ee7c" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.974079 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"946d65452fd0951c3ef8fb2b57851b06810af5abaa48ef093420fd8ee155ee7c"} err="failed to get container status \"946d65452fd0951c3ef8fb2b57851b06810af5abaa48ef093420fd8ee155ee7c\": rpc error: code = NotFound desc = could not find container \"946d65452fd0951c3ef8fb2b57851b06810af5abaa48ef093420fd8ee155ee7c\": container with ID starting with 946d65452fd0951c3ef8fb2b57851b06810af5abaa48ef093420fd8ee155ee7c not found: ID does not exist" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.974105 4717 scope.go:117] "RemoveContainer" 
containerID="896d8d2795fbbf607e899d576b9a4d29d4817d07ae3a7b23e1b64f2b7752e80a" Feb 18 12:14:43 crc kubenswrapper[4717]: E0218 12:14:43.974611 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"896d8d2795fbbf607e899d576b9a4d29d4817d07ae3a7b23e1b64f2b7752e80a\": container with ID starting with 896d8d2795fbbf607e899d576b9a4d29d4817d07ae3a7b23e1b64f2b7752e80a not found: ID does not exist" containerID="896d8d2795fbbf607e899d576b9a4d29d4817d07ae3a7b23e1b64f2b7752e80a" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.974635 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"896d8d2795fbbf607e899d576b9a4d29d4817d07ae3a7b23e1b64f2b7752e80a"} err="failed to get container status \"896d8d2795fbbf607e899d576b9a4d29d4817d07ae3a7b23e1b64f2b7752e80a\": rpc error: code = NotFound desc = could not find container \"896d8d2795fbbf607e899d576b9a4d29d4817d07ae3a7b23e1b64f2b7752e80a\": container with ID starting with 896d8d2795fbbf607e899d576b9a4d29d4817d07ae3a7b23e1b64f2b7752e80a not found: ID does not exist" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.974648 4717 scope.go:117] "RemoveContainer" containerID="d3c7e17e26b883c5605ef249a5128068d293a459d2d5defa537500f4f1c74c8c" Feb 18 12:14:43 crc kubenswrapper[4717]: E0218 12:14:43.974905 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c7e17e26b883c5605ef249a5128068d293a459d2d5defa537500f4f1c74c8c\": container with ID starting with d3c7e17e26b883c5605ef249a5128068d293a459d2d5defa537500f4f1c74c8c not found: ID does not exist" containerID="d3c7e17e26b883c5605ef249a5128068d293a459d2d5defa537500f4f1c74c8c" Feb 18 12:14:43 crc kubenswrapper[4717]: I0218 12:14:43.974929 4717 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d3c7e17e26b883c5605ef249a5128068d293a459d2d5defa537500f4f1c74c8c"} err="failed to get container status \"d3c7e17e26b883c5605ef249a5128068d293a459d2d5defa537500f4f1c74c8c\": rpc error: code = NotFound desc = could not find container \"d3c7e17e26b883c5605ef249a5128068d293a459d2d5defa537500f4f1c74c8c\": container with ID starting with d3c7e17e26b883c5605ef249a5128068d293a459d2d5defa537500f4f1c74c8c not found: ID does not exist" Feb 18 12:14:45 crc kubenswrapper[4717]: I0218 12:14:45.050980 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301" path="/var/lib/kubelet/pods/a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301/volumes" Feb 18 12:14:47 crc kubenswrapper[4717]: I0218 12:14:47.350860 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:47 crc kubenswrapper[4717]: I0218 12:14:47.351320 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:47 crc kubenswrapper[4717]: I0218 12:14:47.403624 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:47 crc kubenswrapper[4717]: I0218 12:14:47.964333 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:48 crc kubenswrapper[4717]: I0218 12:14:48.582814 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jv5r"] Feb 18 12:14:49 crc kubenswrapper[4717]: I0218 12:14:49.933198 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8jv5r" podUID="909c6c49-fb82-4dba-8cae-4c6a256e5c76" containerName="registry-server" containerID="cri-o://be85a2716bc811ade9b0d0e90dc3ec968699b83872e84fcfb3d3a79cc26fcaf6" 
gracePeriod=2 Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.461542 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.566925 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xwfv\" (UniqueName: \"kubernetes.io/projected/909c6c49-fb82-4dba-8cae-4c6a256e5c76-kube-api-access-8xwfv\") pod \"909c6c49-fb82-4dba-8cae-4c6a256e5c76\" (UID: \"909c6c49-fb82-4dba-8cae-4c6a256e5c76\") " Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.567030 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/909c6c49-fb82-4dba-8cae-4c6a256e5c76-utilities\") pod \"909c6c49-fb82-4dba-8cae-4c6a256e5c76\" (UID: \"909c6c49-fb82-4dba-8cae-4c6a256e5c76\") " Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.567077 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/909c6c49-fb82-4dba-8cae-4c6a256e5c76-catalog-content\") pod \"909c6c49-fb82-4dba-8cae-4c6a256e5c76\" (UID: \"909c6c49-fb82-4dba-8cae-4c6a256e5c76\") " Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.568222 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/909c6c49-fb82-4dba-8cae-4c6a256e5c76-utilities" (OuterVolumeSpecName: "utilities") pod "909c6c49-fb82-4dba-8cae-4c6a256e5c76" (UID: "909c6c49-fb82-4dba-8cae-4c6a256e5c76"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.574560 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/909c6c49-fb82-4dba-8cae-4c6a256e5c76-kube-api-access-8xwfv" (OuterVolumeSpecName: "kube-api-access-8xwfv") pod "909c6c49-fb82-4dba-8cae-4c6a256e5c76" (UID: "909c6c49-fb82-4dba-8cae-4c6a256e5c76"). InnerVolumeSpecName "kube-api-access-8xwfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.592965 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/909c6c49-fb82-4dba-8cae-4c6a256e5c76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "909c6c49-fb82-4dba-8cae-4c6a256e5c76" (UID: "909c6c49-fb82-4dba-8cae-4c6a256e5c76"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.669890 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xwfv\" (UniqueName: \"kubernetes.io/projected/909c6c49-fb82-4dba-8cae-4c6a256e5c76-kube-api-access-8xwfv\") on node \"crc\" DevicePath \"\"" Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.670361 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/909c6c49-fb82-4dba-8cae-4c6a256e5c76-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.670378 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/909c6c49-fb82-4dba-8cae-4c6a256e5c76-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.947959 4717 generic.go:334] "Generic (PLEG): container finished" podID="909c6c49-fb82-4dba-8cae-4c6a256e5c76" 
containerID="be85a2716bc811ade9b0d0e90dc3ec968699b83872e84fcfb3d3a79cc26fcaf6" exitCode=0 Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.948035 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jv5r" event={"ID":"909c6c49-fb82-4dba-8cae-4c6a256e5c76","Type":"ContainerDied","Data":"be85a2716bc811ade9b0d0e90dc3ec968699b83872e84fcfb3d3a79cc26fcaf6"} Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.948100 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8jv5r" Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.948130 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8jv5r" event={"ID":"909c6c49-fb82-4dba-8cae-4c6a256e5c76","Type":"ContainerDied","Data":"1b6ae0ea4c9aa60de0990bf51734e75aec99fd31cf757c91efe4e09198d10998"} Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.948160 4717 scope.go:117] "RemoveContainer" containerID="be85a2716bc811ade9b0d0e90dc3ec968699b83872e84fcfb3d3a79cc26fcaf6" Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.985773 4717 scope.go:117] "RemoveContainer" containerID="2ab4599a1108ab45f7548f0e6d0c0fd913a936832512e6bfd62b47c5321ab606" Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.986390 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jv5r"] Feb 18 12:14:50 crc kubenswrapper[4717]: I0218 12:14:50.997214 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8jv5r"] Feb 18 12:14:51 crc kubenswrapper[4717]: I0218 12:14:51.008146 4717 scope.go:117] "RemoveContainer" containerID="4ac6998ee086d9e7c3091f2e457c0e83e75d5b376c23a48d433ab510bce8794f" Feb 18 12:14:51 crc kubenswrapper[4717]: I0218 12:14:51.068208 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="909c6c49-fb82-4dba-8cae-4c6a256e5c76" 
path="/var/lib/kubelet/pods/909c6c49-fb82-4dba-8cae-4c6a256e5c76/volumes" Feb 18 12:14:51 crc kubenswrapper[4717]: I0218 12:14:51.087799 4717 scope.go:117] "RemoveContainer" containerID="be85a2716bc811ade9b0d0e90dc3ec968699b83872e84fcfb3d3a79cc26fcaf6" Feb 18 12:14:51 crc kubenswrapper[4717]: E0218 12:14:51.098500 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be85a2716bc811ade9b0d0e90dc3ec968699b83872e84fcfb3d3a79cc26fcaf6\": container with ID starting with be85a2716bc811ade9b0d0e90dc3ec968699b83872e84fcfb3d3a79cc26fcaf6 not found: ID does not exist" containerID="be85a2716bc811ade9b0d0e90dc3ec968699b83872e84fcfb3d3a79cc26fcaf6" Feb 18 12:14:51 crc kubenswrapper[4717]: I0218 12:14:51.098604 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be85a2716bc811ade9b0d0e90dc3ec968699b83872e84fcfb3d3a79cc26fcaf6"} err="failed to get container status \"be85a2716bc811ade9b0d0e90dc3ec968699b83872e84fcfb3d3a79cc26fcaf6\": rpc error: code = NotFound desc = could not find container \"be85a2716bc811ade9b0d0e90dc3ec968699b83872e84fcfb3d3a79cc26fcaf6\": container with ID starting with be85a2716bc811ade9b0d0e90dc3ec968699b83872e84fcfb3d3a79cc26fcaf6 not found: ID does not exist" Feb 18 12:14:51 crc kubenswrapper[4717]: I0218 12:14:51.098642 4717 scope.go:117] "RemoveContainer" containerID="2ab4599a1108ab45f7548f0e6d0c0fd913a936832512e6bfd62b47c5321ab606" Feb 18 12:14:51 crc kubenswrapper[4717]: E0218 12:14:51.106859 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ab4599a1108ab45f7548f0e6d0c0fd913a936832512e6bfd62b47c5321ab606\": container with ID starting with 2ab4599a1108ab45f7548f0e6d0c0fd913a936832512e6bfd62b47c5321ab606 not found: ID does not exist" containerID="2ab4599a1108ab45f7548f0e6d0c0fd913a936832512e6bfd62b47c5321ab606" Feb 18 12:14:51 crc kubenswrapper[4717]: 
I0218 12:14:51.106930 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ab4599a1108ab45f7548f0e6d0c0fd913a936832512e6bfd62b47c5321ab606"} err="failed to get container status \"2ab4599a1108ab45f7548f0e6d0c0fd913a936832512e6bfd62b47c5321ab606\": rpc error: code = NotFound desc = could not find container \"2ab4599a1108ab45f7548f0e6d0c0fd913a936832512e6bfd62b47c5321ab606\": container with ID starting with 2ab4599a1108ab45f7548f0e6d0c0fd913a936832512e6bfd62b47c5321ab606 not found: ID does not exist" Feb 18 12:14:51 crc kubenswrapper[4717]: I0218 12:14:51.106968 4717 scope.go:117] "RemoveContainer" containerID="4ac6998ee086d9e7c3091f2e457c0e83e75d5b376c23a48d433ab510bce8794f" Feb 18 12:14:51 crc kubenswrapper[4717]: E0218 12:14:51.107578 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ac6998ee086d9e7c3091f2e457c0e83e75d5b376c23a48d433ab510bce8794f\": container with ID starting with 4ac6998ee086d9e7c3091f2e457c0e83e75d5b376c23a48d433ab510bce8794f not found: ID does not exist" containerID="4ac6998ee086d9e7c3091f2e457c0e83e75d5b376c23a48d433ab510bce8794f" Feb 18 12:14:51 crc kubenswrapper[4717]: I0218 12:14:51.107604 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac6998ee086d9e7c3091f2e457c0e83e75d5b376c23a48d433ab510bce8794f"} err="failed to get container status \"4ac6998ee086d9e7c3091f2e457c0e83e75d5b376c23a48d433ab510bce8794f\": rpc error: code = NotFound desc = could not find container \"4ac6998ee086d9e7c3091f2e457c0e83e75d5b376c23a48d433ab510bce8794f\": container with ID starting with 4ac6998ee086d9e7c3091f2e457c0e83e75d5b376c23a48d433ab510bce8794f not found: ID does not exist" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.157795 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl"] Feb 18 12:15:00 crc 
kubenswrapper[4717]: E0218 12:15:00.158996 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301" containerName="extract-utilities" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.159015 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301" containerName="extract-utilities" Feb 18 12:15:00 crc kubenswrapper[4717]: E0218 12:15:00.159037 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="909c6c49-fb82-4dba-8cae-4c6a256e5c76" containerName="extract-content" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.159044 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="909c6c49-fb82-4dba-8cae-4c6a256e5c76" containerName="extract-content" Feb 18 12:15:00 crc kubenswrapper[4717]: E0218 12:15:00.159063 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="909c6c49-fb82-4dba-8cae-4c6a256e5c76" containerName="registry-server" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.159070 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="909c6c49-fb82-4dba-8cae-4c6a256e5c76" containerName="registry-server" Feb 18 12:15:00 crc kubenswrapper[4717]: E0218 12:15:00.159095 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301" containerName="registry-server" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.159103 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301" containerName="registry-server" Feb 18 12:15:00 crc kubenswrapper[4717]: E0218 12:15:00.159124 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="909c6c49-fb82-4dba-8cae-4c6a256e5c76" containerName="extract-utilities" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.159131 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="909c6c49-fb82-4dba-8cae-4c6a256e5c76" containerName="extract-utilities" Feb 18 12:15:00 crc 
kubenswrapper[4717]: E0218 12:15:00.159150 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301" containerName="extract-content" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.159160 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301" containerName="extract-content" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.159498 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f73ad3-b6a0-404a-a2cf-cc4b3e1ef301" containerName="registry-server" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.159524 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="909c6c49-fb82-4dba-8cae-4c6a256e5c76" containerName="registry-server" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.160358 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.163639 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.164364 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.168733 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl"] Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.303453 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb39d3e5-4f78-4359-9a18-f9241be6a618-secret-volume\") pod \"collect-profiles-29523615-ng2xl\" (UID: \"fb39d3e5-4f78-4359-9a18-f9241be6a618\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.303655 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v42gs\" (UniqueName: \"kubernetes.io/projected/fb39d3e5-4f78-4359-9a18-f9241be6a618-kube-api-access-v42gs\") pod \"collect-profiles-29523615-ng2xl\" (UID: \"fb39d3e5-4f78-4359-9a18-f9241be6a618\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.303739 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb39d3e5-4f78-4359-9a18-f9241be6a618-config-volume\") pod \"collect-profiles-29523615-ng2xl\" (UID: \"fb39d3e5-4f78-4359-9a18-f9241be6a618\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.406525 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb39d3e5-4f78-4359-9a18-f9241be6a618-secret-volume\") pod \"collect-profiles-29523615-ng2xl\" (UID: \"fb39d3e5-4f78-4359-9a18-f9241be6a618\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.406689 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v42gs\" (UniqueName: \"kubernetes.io/projected/fb39d3e5-4f78-4359-9a18-f9241be6a618-kube-api-access-v42gs\") pod \"collect-profiles-29523615-ng2xl\" (UID: \"fb39d3e5-4f78-4359-9a18-f9241be6a618\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.406754 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/fb39d3e5-4f78-4359-9a18-f9241be6a618-config-volume\") pod \"collect-profiles-29523615-ng2xl\" (UID: \"fb39d3e5-4f78-4359-9a18-f9241be6a618\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.408108 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb39d3e5-4f78-4359-9a18-f9241be6a618-config-volume\") pod \"collect-profiles-29523615-ng2xl\" (UID: \"fb39d3e5-4f78-4359-9a18-f9241be6a618\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.416822 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb39d3e5-4f78-4359-9a18-f9241be6a618-secret-volume\") pod \"collect-profiles-29523615-ng2xl\" (UID: \"fb39d3e5-4f78-4359-9a18-f9241be6a618\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.430096 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v42gs\" (UniqueName: \"kubernetes.io/projected/fb39d3e5-4f78-4359-9a18-f9241be6a618-kube-api-access-v42gs\") pod \"collect-profiles-29523615-ng2xl\" (UID: \"fb39d3e5-4f78-4359-9a18-f9241be6a618\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.489596 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl" Feb 18 12:15:00 crc kubenswrapper[4717]: I0218 12:15:00.955252 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl"] Feb 18 12:15:01 crc kubenswrapper[4717]: I0218 12:15:01.086974 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl" event={"ID":"fb39d3e5-4f78-4359-9a18-f9241be6a618","Type":"ContainerStarted","Data":"3ca338d9a62da90e741235d84b11c7db2a9284637b8dac3280f6e0c9ad9f5c2c"} Feb 18 12:15:02 crc kubenswrapper[4717]: I0218 12:15:02.099852 4717 generic.go:334] "Generic (PLEG): container finished" podID="fb39d3e5-4f78-4359-9a18-f9241be6a618" containerID="2739e11789e2ef369e13a55e3cf12ea83cb1b6146b001f36982e465a25d51f63" exitCode=0 Feb 18 12:15:02 crc kubenswrapper[4717]: I0218 12:15:02.099928 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl" event={"ID":"fb39d3e5-4f78-4359-9a18-f9241be6a618","Type":"ContainerDied","Data":"2739e11789e2ef369e13a55e3cf12ea83cb1b6146b001f36982e465a25d51f63"} Feb 18 12:15:03 crc kubenswrapper[4717]: I0218 12:15:03.488369 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl" Feb 18 12:15:03 crc kubenswrapper[4717]: I0218 12:15:03.581362 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v42gs\" (UniqueName: \"kubernetes.io/projected/fb39d3e5-4f78-4359-9a18-f9241be6a618-kube-api-access-v42gs\") pod \"fb39d3e5-4f78-4359-9a18-f9241be6a618\" (UID: \"fb39d3e5-4f78-4359-9a18-f9241be6a618\") " Feb 18 12:15:03 crc kubenswrapper[4717]: I0218 12:15:03.581809 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb39d3e5-4f78-4359-9a18-f9241be6a618-secret-volume\") pod \"fb39d3e5-4f78-4359-9a18-f9241be6a618\" (UID: \"fb39d3e5-4f78-4359-9a18-f9241be6a618\") " Feb 18 12:15:03 crc kubenswrapper[4717]: I0218 12:15:03.581861 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb39d3e5-4f78-4359-9a18-f9241be6a618-config-volume\") pod \"fb39d3e5-4f78-4359-9a18-f9241be6a618\" (UID: \"fb39d3e5-4f78-4359-9a18-f9241be6a618\") " Feb 18 12:15:03 crc kubenswrapper[4717]: I0218 12:15:03.583112 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb39d3e5-4f78-4359-9a18-f9241be6a618-config-volume" (OuterVolumeSpecName: "config-volume") pod "fb39d3e5-4f78-4359-9a18-f9241be6a618" (UID: "fb39d3e5-4f78-4359-9a18-f9241be6a618"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:15:03 crc kubenswrapper[4717]: I0218 12:15:03.592037 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb39d3e5-4f78-4359-9a18-f9241be6a618-kube-api-access-v42gs" (OuterVolumeSpecName: "kube-api-access-v42gs") pod "fb39d3e5-4f78-4359-9a18-f9241be6a618" (UID: "fb39d3e5-4f78-4359-9a18-f9241be6a618"). 
InnerVolumeSpecName "kube-api-access-v42gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:15:03 crc kubenswrapper[4717]: I0218 12:15:03.592124 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb39d3e5-4f78-4359-9a18-f9241be6a618-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fb39d3e5-4f78-4359-9a18-f9241be6a618" (UID: "fb39d3e5-4f78-4359-9a18-f9241be6a618"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:15:03 crc kubenswrapper[4717]: I0218 12:15:03.685297 4717 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb39d3e5-4f78-4359-9a18-f9241be6a618-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 12:15:03 crc kubenswrapper[4717]: I0218 12:15:03.685358 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v42gs\" (UniqueName: \"kubernetes.io/projected/fb39d3e5-4f78-4359-9a18-f9241be6a618-kube-api-access-v42gs\") on node \"crc\" DevicePath \"\"" Feb 18 12:15:03 crc kubenswrapper[4717]: I0218 12:15:03.685382 4717 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb39d3e5-4f78-4359-9a18-f9241be6a618-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 12:15:04 crc kubenswrapper[4717]: I0218 12:15:04.126580 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl" event={"ID":"fb39d3e5-4f78-4359-9a18-f9241be6a618","Type":"ContainerDied","Data":"3ca338d9a62da90e741235d84b11c7db2a9284637b8dac3280f6e0c9ad9f5c2c"} Feb 18 12:15:04 crc kubenswrapper[4717]: I0218 12:15:04.127248 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ca338d9a62da90e741235d84b11c7db2a9284637b8dac3280f6e0c9ad9f5c2c" Feb 18 12:15:04 crc kubenswrapper[4717]: I0218 12:15:04.127153 4717 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.041301 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4qr85"] Feb 18 12:15:38 crc kubenswrapper[4717]: E0218 12:15:38.042502 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb39d3e5-4f78-4359-9a18-f9241be6a618" containerName="collect-profiles" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.042523 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb39d3e5-4f78-4359-9a18-f9241be6a618" containerName="collect-profiles" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.042753 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb39d3e5-4f78-4359-9a18-f9241be6a618" containerName="collect-profiles" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.044742 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.054408 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qr85"] Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.131075 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-catalog-content\") pod \"redhat-operators-4qr85\" (UID: \"f49e8658-a665-4f18-b28e-e7b4e8dff1a4\") " pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.131748 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxvvv\" (UniqueName: \"kubernetes.io/projected/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-kube-api-access-sxvvv\") pod \"redhat-operators-4qr85\" (UID: \"f49e8658-a665-4f18-b28e-e7b4e8dff1a4\") " pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.131818 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-utilities\") pod \"redhat-operators-4qr85\" (UID: \"f49e8658-a665-4f18-b28e-e7b4e8dff1a4\") " pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.233215 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxvvv\" (UniqueName: \"kubernetes.io/projected/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-kube-api-access-sxvvv\") pod \"redhat-operators-4qr85\" (UID: \"f49e8658-a665-4f18-b28e-e7b4e8dff1a4\") " pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.233339 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-utilities\") pod \"redhat-operators-4qr85\" (UID: \"f49e8658-a665-4f18-b28e-e7b4e8dff1a4\") " pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.233428 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-catalog-content\") pod \"redhat-operators-4qr85\" (UID: \"f49e8658-a665-4f18-b28e-e7b4e8dff1a4\") " pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.234212 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-utilities\") pod \"redhat-operators-4qr85\" (UID: \"f49e8658-a665-4f18-b28e-e7b4e8dff1a4\") " pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.234253 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-catalog-content\") pod \"redhat-operators-4qr85\" (UID: \"f49e8658-a665-4f18-b28e-e7b4e8dff1a4\") " pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.260556 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxvvv\" (UniqueName: \"kubernetes.io/projected/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-kube-api-access-sxvvv\") pod \"redhat-operators-4qr85\" (UID: \"f49e8658-a665-4f18-b28e-e7b4e8dff1a4\") " pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.381035 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.445732 4717 scope.go:117] "RemoveContainer" containerID="e09bd7107cce0fe76c7d46da5a5401e326581c84a170a2d7ae1853e0421504ea" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.527197 4717 scope.go:117] "RemoveContainer" containerID="3c0dde205d79d0820768347d4319c86de00aa69bd37bb496f38a746e1674e834" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.557610 4717 scope.go:117] "RemoveContainer" containerID="eb98497e73e4f459ad425bd8b4cdac42ed9913b5b6c583d4bbfccbc291b902c3" Feb 18 12:15:38 crc kubenswrapper[4717]: I0218 12:15:38.889618 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qr85"] Feb 18 12:15:39 crc kubenswrapper[4717]: I0218 12:15:39.515636 4717 generic.go:334] "Generic (PLEG): container finished" podID="f49e8658-a665-4f18-b28e-e7b4e8dff1a4" containerID="d4569fea1b050bdc3fde3016d8d412bcb5e0529937e10e6260928d627cc1300e" exitCode=0 Feb 18 12:15:39 crc kubenswrapper[4717]: I0218 12:15:39.515868 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qr85" event={"ID":"f49e8658-a665-4f18-b28e-e7b4e8dff1a4","Type":"ContainerDied","Data":"d4569fea1b050bdc3fde3016d8d412bcb5e0529937e10e6260928d627cc1300e"} Feb 18 12:15:39 crc kubenswrapper[4717]: I0218 12:15:39.516195 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qr85" event={"ID":"f49e8658-a665-4f18-b28e-e7b4e8dff1a4","Type":"ContainerStarted","Data":"27a06bf584f021e7905ad6fef23e1f7fe616bee3b7a46cc8ba88d716a83b8cd3"} Feb 18 12:15:39 crc kubenswrapper[4717]: I0218 12:15:39.523215 4717 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:15:40 crc kubenswrapper[4717]: I0218 12:15:40.547163 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-4qr85" event={"ID":"f49e8658-a665-4f18-b28e-e7b4e8dff1a4","Type":"ContainerStarted","Data":"4c5f9c4deb289e6803023f3531c1a22e0ddd79dec823d9a959d451be4cc272ef"} Feb 18 12:15:43 crc kubenswrapper[4717]: I0218 12:15:43.587831 4717 generic.go:334] "Generic (PLEG): container finished" podID="f49e8658-a665-4f18-b28e-e7b4e8dff1a4" containerID="4c5f9c4deb289e6803023f3531c1a22e0ddd79dec823d9a959d451be4cc272ef" exitCode=0 Feb 18 12:15:43 crc kubenswrapper[4717]: I0218 12:15:43.588314 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qr85" event={"ID":"f49e8658-a665-4f18-b28e-e7b4e8dff1a4","Type":"ContainerDied","Data":"4c5f9c4deb289e6803023f3531c1a22e0ddd79dec823d9a959d451be4cc272ef"} Feb 18 12:15:44 crc kubenswrapper[4717]: I0218 12:15:44.600346 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qr85" event={"ID":"f49e8658-a665-4f18-b28e-e7b4e8dff1a4","Type":"ContainerStarted","Data":"354ceb456de225520c4443d70688b1434a6d16c980cf27806e2b1f7e622804dd"} Feb 18 12:15:44 crc kubenswrapper[4717]: I0218 12:15:44.636175 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4qr85" podStartSLOduration=2.159112599 podStartE2EDuration="6.636134653s" podCreationTimestamp="2026-02-18 12:15:38 +0000 UTC" firstStartedPulling="2026-02-18 12:15:39.522982087 +0000 UTC m=+1573.925083403" lastFinishedPulling="2026-02-18 12:15:44.000004111 +0000 UTC m=+1578.402105457" observedRunningTime="2026-02-18 12:15:44.624772797 +0000 UTC m=+1579.026874113" watchObservedRunningTime="2026-02-18 12:15:44.636134653 +0000 UTC m=+1579.038235969" Feb 18 12:15:48 crc kubenswrapper[4717]: I0218 12:15:48.381809 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:15:48 crc kubenswrapper[4717]: I0218 12:15:48.383474 4717 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:15:49 crc kubenswrapper[4717]: I0218 12:15:49.434526 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4qr85" podUID="f49e8658-a665-4f18-b28e-e7b4e8dff1a4" containerName="registry-server" probeResult="failure" output=< Feb 18 12:15:49 crc kubenswrapper[4717]: timeout: failed to connect service ":50051" within 1s Feb 18 12:15:49 crc kubenswrapper[4717]: > Feb 18 12:15:55 crc kubenswrapper[4717]: I0218 12:15:55.061983 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-4rv27"] Feb 18 12:15:55 crc kubenswrapper[4717]: I0218 12:15:55.062973 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6995-account-create-update-gv7nt"] Feb 18 12:15:55 crc kubenswrapper[4717]: I0218 12:15:55.078545 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-4rv27"] Feb 18 12:15:55 crc kubenswrapper[4717]: I0218 12:15:55.097391 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-6995-account-create-update-gv7nt"] Feb 18 12:15:55 crc kubenswrapper[4717]: I0218 12:15:55.717075 4717 generic.go:334] "Generic (PLEG): container finished" podID="ca940f40-1894-4b6a-bb57-acac20cd47f3" containerID="82f84be545ddeccf6b6bb58ba8b7314e751a1b9ccf7ca360373f8813c19b0c08" exitCode=0 Feb 18 12:15:55 crc kubenswrapper[4717]: I0218 12:15:55.717164 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" event={"ID":"ca940f40-1894-4b6a-bb57-acac20cd47f3","Type":"ContainerDied","Data":"82f84be545ddeccf6b6bb58ba8b7314e751a1b9ccf7ca360373f8813c19b0c08"} Feb 18 12:15:56 crc kubenswrapper[4717]: I0218 12:15:56.043089 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e8be-account-create-update-5wd2v"] Feb 18 12:15:56 crc kubenswrapper[4717]: 
I0218 12:15:56.055969 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-gfpjb"] Feb 18 12:15:56 crc kubenswrapper[4717]: I0218 12:15:56.073034 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-wlldn"] Feb 18 12:15:56 crc kubenswrapper[4717]: I0218 12:15:56.110959 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-af1f-account-create-update-d8v98"] Feb 18 12:15:56 crc kubenswrapper[4717]: I0218 12:15:56.123291 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-af1f-account-create-update-d8v98"] Feb 18 12:15:56 crc kubenswrapper[4717]: I0218 12:15:56.134549 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-wlldn"] Feb 18 12:15:56 crc kubenswrapper[4717]: I0218 12:15:56.145856 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-gfpjb"] Feb 18 12:15:56 crc kubenswrapper[4717]: I0218 12:15:56.155024 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e8be-account-create-update-5wd2v"] Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.053822 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14958cee-73ca-44f3-a12e-b505644a4429" path="/var/lib/kubelet/pods/14958cee-73ca-44f3-a12e-b505644a4429/volumes" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.054875 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5de5a667-32ab-4232-b42c-071d6a4347f9" path="/var/lib/kubelet/pods/5de5a667-32ab-4232-b42c-071d6a4347f9/volumes" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.055566 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b10cc0ac-d3a2-4723-bbb3-772cc5327d2e" path="/var/lib/kubelet/pods/b10cc0ac-d3a2-4723-bbb3-772cc5327d2e/volumes" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.056285 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="c7550ba5-c203-4016-90a7-4273e4a8688a" path="/var/lib/kubelet/pods/c7550ba5-c203-4016-90a7-4273e4a8688a/volumes" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.057524 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e61481b3-51b8-4663-9dbd-c2abb65388df" path="/var/lib/kubelet/pods/e61481b3-51b8-4663-9dbd-c2abb65388df/volumes" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.058146 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe423299-da9c-4578-90bc-9c6e13b7acf6" path="/var/lib/kubelet/pods/fe423299-da9c-4578-90bc-9c6e13b7acf6/volumes" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.280706 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.367854 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-ssh-key-openstack-edpm-ipam\") pod \"ca940f40-1894-4b6a-bb57-acac20cd47f3\" (UID: \"ca940f40-1894-4b6a-bb57-acac20cd47f3\") " Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.368075 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-bootstrap-combined-ca-bundle\") pod \"ca940f40-1894-4b6a-bb57-acac20cd47f3\" (UID: \"ca940f40-1894-4b6a-bb57-acac20cd47f3\") " Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.368140 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p96tl\" (UniqueName: \"kubernetes.io/projected/ca940f40-1894-4b6a-bb57-acac20cd47f3-kube-api-access-p96tl\") pod \"ca940f40-1894-4b6a-bb57-acac20cd47f3\" (UID: \"ca940f40-1894-4b6a-bb57-acac20cd47f3\") " Feb 18 12:15:57 crc 
kubenswrapper[4717]: I0218 12:15:57.368372 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-inventory\") pod \"ca940f40-1894-4b6a-bb57-acac20cd47f3\" (UID: \"ca940f40-1894-4b6a-bb57-acac20cd47f3\") " Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.376574 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ca940f40-1894-4b6a-bb57-acac20cd47f3" (UID: "ca940f40-1894-4b6a-bb57-acac20cd47f3"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.378570 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca940f40-1894-4b6a-bb57-acac20cd47f3-kube-api-access-p96tl" (OuterVolumeSpecName: "kube-api-access-p96tl") pod "ca940f40-1894-4b6a-bb57-acac20cd47f3" (UID: "ca940f40-1894-4b6a-bb57-acac20cd47f3"). InnerVolumeSpecName "kube-api-access-p96tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.405595 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ca940f40-1894-4b6a-bb57-acac20cd47f3" (UID: "ca940f40-1894-4b6a-bb57-acac20cd47f3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.427288 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-inventory" (OuterVolumeSpecName: "inventory") pod "ca940f40-1894-4b6a-bb57-acac20cd47f3" (UID: "ca940f40-1894-4b6a-bb57-acac20cd47f3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.470888 4717 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.470953 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.470968 4717 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca940f40-1894-4b6a-bb57-acac20cd47f3-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.470980 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p96tl\" (UniqueName: \"kubernetes.io/projected/ca940f40-1894-4b6a-bb57-acac20cd47f3-kube-api-access-p96tl\") on node \"crc\" DevicePath \"\"" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.739479 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" event={"ID":"ca940f40-1894-4b6a-bb57-acac20cd47f3","Type":"ContainerDied","Data":"970c58bddfb135966b36946afe45ca991c13826662607632e2dd2677f30cd98f"} Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.739537 4717 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="970c58bddfb135966b36946afe45ca991c13826662607632e2dd2677f30cd98f" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.739550 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg" Feb 18 12:15:57 crc kubenswrapper[4717]: E0218 12:15:57.838758 4717 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca940f40_1894_4b6a_bb57_acac20cd47f3.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca940f40_1894_4b6a_bb57_acac20cd47f3.slice/crio-970c58bddfb135966b36946afe45ca991c13826662607632e2dd2677f30cd98f\": RecentStats: unable to find data in memory cache]" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.859062 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57"] Feb 18 12:15:57 crc kubenswrapper[4717]: E0218 12:15:57.860234 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca940f40-1894-4b6a-bb57-acac20cd47f3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.860284 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca940f40-1894-4b6a-bb57-acac20cd47f3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.860536 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca940f40-1894-4b6a-bb57-acac20cd47f3" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.861459 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.869857 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.870073 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.870805 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.871831 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57"] Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.878980 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.991869 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh6js\" (UniqueName: \"kubernetes.io/projected/57c4f818-2860-43d9-9c1b-f99b48449af0-kube-api-access-mh6js\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dfh57\" (UID: \"57c4f818-2860-43d9-9c1b-f99b48449af0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" Feb 18 12:15:57 crc kubenswrapper[4717]: I0218 12:15:57.992290 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57c4f818-2860-43d9-9c1b-f99b48449af0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dfh57\" (UID: \"57c4f818-2860-43d9-9c1b-f99b48449af0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" Feb 18 12:15:57 crc 
kubenswrapper[4717]: I0218 12:15:57.992412 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57c4f818-2860-43d9-9c1b-f99b48449af0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dfh57\" (UID: \"57c4f818-2860-43d9-9c1b-f99b48449af0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" Feb 18 12:15:58 crc kubenswrapper[4717]: I0218 12:15:58.094221 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57c4f818-2860-43d9-9c1b-f99b48449af0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dfh57\" (UID: \"57c4f818-2860-43d9-9c1b-f99b48449af0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" Feb 18 12:15:58 crc kubenswrapper[4717]: I0218 12:15:58.094406 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh6js\" (UniqueName: \"kubernetes.io/projected/57c4f818-2860-43d9-9c1b-f99b48449af0-kube-api-access-mh6js\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dfh57\" (UID: \"57c4f818-2860-43d9-9c1b-f99b48449af0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" Feb 18 12:15:58 crc kubenswrapper[4717]: I0218 12:15:58.094470 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57c4f818-2860-43d9-9c1b-f99b48449af0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dfh57\" (UID: \"57c4f818-2860-43d9-9c1b-f99b48449af0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" Feb 18 12:15:58 crc kubenswrapper[4717]: I0218 12:15:58.104098 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/57c4f818-2860-43d9-9c1b-f99b48449af0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dfh57\" (UID: \"57c4f818-2860-43d9-9c1b-f99b48449af0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" Feb 18 12:15:58 crc kubenswrapper[4717]: I0218 12:15:58.109173 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57c4f818-2860-43d9-9c1b-f99b48449af0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dfh57\" (UID: \"57c4f818-2860-43d9-9c1b-f99b48449af0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" Feb 18 12:15:58 crc kubenswrapper[4717]: I0218 12:15:58.118590 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh6js\" (UniqueName: \"kubernetes.io/projected/57c4f818-2860-43d9-9c1b-f99b48449af0-kube-api-access-mh6js\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dfh57\" (UID: \"57c4f818-2860-43d9-9c1b-f99b48449af0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" Feb 18 12:15:58 crc kubenswrapper[4717]: I0218 12:15:58.184720 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" Feb 18 12:15:58 crc kubenswrapper[4717]: I0218 12:15:58.809656 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57"] Feb 18 12:15:59 crc kubenswrapper[4717]: I0218 12:15:59.446434 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4qr85" podUID="f49e8658-a665-4f18-b28e-e7b4e8dff1a4" containerName="registry-server" probeResult="failure" output=< Feb 18 12:15:59 crc kubenswrapper[4717]: timeout: failed to connect service ":50051" within 1s Feb 18 12:15:59 crc kubenswrapper[4717]: > Feb 18 12:15:59 crc kubenswrapper[4717]: I0218 12:15:59.777102 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" event={"ID":"57c4f818-2860-43d9-9c1b-f99b48449af0","Type":"ContainerStarted","Data":"22ceae2f5e4e134b47c04ef42abd48d26dc58cda2febe629ffb2bc08ca65806c"} Feb 18 12:15:59 crc kubenswrapper[4717]: I0218 12:15:59.777186 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" event={"ID":"57c4f818-2860-43d9-9c1b-f99b48449af0","Type":"ContainerStarted","Data":"c3e734451934b74d7a04a96810bf3f374b0c66c31bfeef4bda86c8a79df3baff"} Feb 18 12:15:59 crc kubenswrapper[4717]: I0218 12:15:59.809490 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" podStartSLOduration=2.309933325 podStartE2EDuration="2.809466307s" podCreationTimestamp="2026-02-18 12:15:57 +0000 UTC" firstStartedPulling="2026-02-18 12:15:58.816885391 +0000 UTC m=+1593.218986707" lastFinishedPulling="2026-02-18 12:15:59.316418383 +0000 UTC m=+1593.718519689" observedRunningTime="2026-02-18 12:15:59.799547771 +0000 UTC m=+1594.201649077" watchObservedRunningTime="2026-02-18 
12:15:59.809466307 +0000 UTC m=+1594.211567623" Feb 18 12:16:02 crc kubenswrapper[4717]: I0218 12:16:02.035520 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wdpkl"] Feb 18 12:16:02 crc kubenswrapper[4717]: I0218 12:16:02.047061 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wdpkl"] Feb 18 12:16:03 crc kubenswrapper[4717]: I0218 12:16:03.054379 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d6db735-5566-4452-adeb-88fa28f4f417" path="/var/lib/kubelet/pods/6d6db735-5566-4452-adeb-88fa28f4f417/volumes" Feb 18 12:16:08 crc kubenswrapper[4717]: I0218 12:16:08.435711 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:16:08 crc kubenswrapper[4717]: I0218 12:16:08.489357 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:16:09 crc kubenswrapper[4717]: I0218 12:16:09.239995 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4qr85"] Feb 18 12:16:09 crc kubenswrapper[4717]: I0218 12:16:09.883069 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4qr85" podUID="f49e8658-a665-4f18-b28e-e7b4e8dff1a4" containerName="registry-server" containerID="cri-o://354ceb456de225520c4443d70688b1434a6d16c980cf27806e2b1f7e622804dd" gracePeriod=2 Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.405074 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.482047 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxvvv\" (UniqueName: \"kubernetes.io/projected/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-kube-api-access-sxvvv\") pod \"f49e8658-a665-4f18-b28e-e7b4e8dff1a4\" (UID: \"f49e8658-a665-4f18-b28e-e7b4e8dff1a4\") " Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.482920 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-utilities\") pod \"f49e8658-a665-4f18-b28e-e7b4e8dff1a4\" (UID: \"f49e8658-a665-4f18-b28e-e7b4e8dff1a4\") " Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.483090 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-catalog-content\") pod \"f49e8658-a665-4f18-b28e-e7b4e8dff1a4\" (UID: \"f49e8658-a665-4f18-b28e-e7b4e8dff1a4\") " Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.484443 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-utilities" (OuterVolumeSpecName: "utilities") pod "f49e8658-a665-4f18-b28e-e7b4e8dff1a4" (UID: "f49e8658-a665-4f18-b28e-e7b4e8dff1a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.493703 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-kube-api-access-sxvvv" (OuterVolumeSpecName: "kube-api-access-sxvvv") pod "f49e8658-a665-4f18-b28e-e7b4e8dff1a4" (UID: "f49e8658-a665-4f18-b28e-e7b4e8dff1a4"). InnerVolumeSpecName "kube-api-access-sxvvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.585978 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxvvv\" (UniqueName: \"kubernetes.io/projected/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-kube-api-access-sxvvv\") on node \"crc\" DevicePath \"\"" Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.586025 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.609671 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f49e8658-a665-4f18-b28e-e7b4e8dff1a4" (UID: "f49e8658-a665-4f18-b28e-e7b4e8dff1a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.688223 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f49e8658-a665-4f18-b28e-e7b4e8dff1a4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.897487 4717 generic.go:334] "Generic (PLEG): container finished" podID="f49e8658-a665-4f18-b28e-e7b4e8dff1a4" containerID="354ceb456de225520c4443d70688b1434a6d16c980cf27806e2b1f7e622804dd" exitCode=0 Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.897549 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qr85" event={"ID":"f49e8658-a665-4f18-b28e-e7b4e8dff1a4","Type":"ContainerDied","Data":"354ceb456de225520c4443d70688b1434a6d16c980cf27806e2b1f7e622804dd"} Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.897558 4717 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qr85" Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.897595 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qr85" event={"ID":"f49e8658-a665-4f18-b28e-e7b4e8dff1a4","Type":"ContainerDied","Data":"27a06bf584f021e7905ad6fef23e1f7fe616bee3b7a46cc8ba88d716a83b8cd3"} Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.897622 4717 scope.go:117] "RemoveContainer" containerID="354ceb456de225520c4443d70688b1434a6d16c980cf27806e2b1f7e622804dd" Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.936062 4717 scope.go:117] "RemoveContainer" containerID="4c5f9c4deb289e6803023f3531c1a22e0ddd79dec823d9a959d451be4cc272ef" Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.938706 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4qr85"] Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.950547 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4qr85"] Feb 18 12:16:10 crc kubenswrapper[4717]: I0218 12:16:10.981548 4717 scope.go:117] "RemoveContainer" containerID="d4569fea1b050bdc3fde3016d8d412bcb5e0529937e10e6260928d627cc1300e" Feb 18 12:16:11 crc kubenswrapper[4717]: I0218 12:16:11.041442 4717 scope.go:117] "RemoveContainer" containerID="354ceb456de225520c4443d70688b1434a6d16c980cf27806e2b1f7e622804dd" Feb 18 12:16:11 crc kubenswrapper[4717]: E0218 12:16:11.042911 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"354ceb456de225520c4443d70688b1434a6d16c980cf27806e2b1f7e622804dd\": container with ID starting with 354ceb456de225520c4443d70688b1434a6d16c980cf27806e2b1f7e622804dd not found: ID does not exist" containerID="354ceb456de225520c4443d70688b1434a6d16c980cf27806e2b1f7e622804dd" Feb 18 12:16:11 crc kubenswrapper[4717]: I0218 12:16:11.042974 4717 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"354ceb456de225520c4443d70688b1434a6d16c980cf27806e2b1f7e622804dd"} err="failed to get container status \"354ceb456de225520c4443d70688b1434a6d16c980cf27806e2b1f7e622804dd\": rpc error: code = NotFound desc = could not find container \"354ceb456de225520c4443d70688b1434a6d16c980cf27806e2b1f7e622804dd\": container with ID starting with 354ceb456de225520c4443d70688b1434a6d16c980cf27806e2b1f7e622804dd not found: ID does not exist" Feb 18 12:16:11 crc kubenswrapper[4717]: I0218 12:16:11.043006 4717 scope.go:117] "RemoveContainer" containerID="4c5f9c4deb289e6803023f3531c1a22e0ddd79dec823d9a959d451be4cc272ef" Feb 18 12:16:11 crc kubenswrapper[4717]: E0218 12:16:11.043592 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c5f9c4deb289e6803023f3531c1a22e0ddd79dec823d9a959d451be4cc272ef\": container with ID starting with 4c5f9c4deb289e6803023f3531c1a22e0ddd79dec823d9a959d451be4cc272ef not found: ID does not exist" containerID="4c5f9c4deb289e6803023f3531c1a22e0ddd79dec823d9a959d451be4cc272ef" Feb 18 12:16:11 crc kubenswrapper[4717]: I0218 12:16:11.043625 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5f9c4deb289e6803023f3531c1a22e0ddd79dec823d9a959d451be4cc272ef"} err="failed to get container status \"4c5f9c4deb289e6803023f3531c1a22e0ddd79dec823d9a959d451be4cc272ef\": rpc error: code = NotFound desc = could not find container \"4c5f9c4deb289e6803023f3531c1a22e0ddd79dec823d9a959d451be4cc272ef\": container with ID starting with 4c5f9c4deb289e6803023f3531c1a22e0ddd79dec823d9a959d451be4cc272ef not found: ID does not exist" Feb 18 12:16:11 crc kubenswrapper[4717]: I0218 12:16:11.043674 4717 scope.go:117] "RemoveContainer" containerID="d4569fea1b050bdc3fde3016d8d412bcb5e0529937e10e6260928d627cc1300e" Feb 18 12:16:11 crc kubenswrapper[4717]: E0218 
12:16:11.043961 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4569fea1b050bdc3fde3016d8d412bcb5e0529937e10e6260928d627cc1300e\": container with ID starting with d4569fea1b050bdc3fde3016d8d412bcb5e0529937e10e6260928d627cc1300e not found: ID does not exist" containerID="d4569fea1b050bdc3fde3016d8d412bcb5e0529937e10e6260928d627cc1300e" Feb 18 12:16:11 crc kubenswrapper[4717]: I0218 12:16:11.043986 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4569fea1b050bdc3fde3016d8d412bcb5e0529937e10e6260928d627cc1300e"} err="failed to get container status \"d4569fea1b050bdc3fde3016d8d412bcb5e0529937e10e6260928d627cc1300e\": rpc error: code = NotFound desc = could not find container \"d4569fea1b050bdc3fde3016d8d412bcb5e0529937e10e6260928d627cc1300e\": container with ID starting with d4569fea1b050bdc3fde3016d8d412bcb5e0529937e10e6260928d627cc1300e not found: ID does not exist" Feb 18 12:16:11 crc kubenswrapper[4717]: I0218 12:16:11.051394 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f49e8658-a665-4f18-b28e-e7b4e8dff1a4" path="/var/lib/kubelet/pods/f49e8658-a665-4f18-b28e-e7b4e8dff1a4/volumes" Feb 18 12:16:25 crc kubenswrapper[4717]: I0218 12:16:25.051092 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-h7p5l"] Feb 18 12:16:25 crc kubenswrapper[4717]: I0218 12:16:25.061380 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-h7p5l"] Feb 18 12:16:27 crc kubenswrapper[4717]: I0218 12:16:27.052130 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8431e64c-4bcb-4dce-a7bb-123b54445b08" path="/var/lib/kubelet/pods/8431e64c-4bcb-4dce-a7bb-123b54445b08/volumes" Feb 18 12:16:33 crc kubenswrapper[4717]: I0218 12:16:33.055001 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-cgb2w"] Feb 18 12:16:33 
crc kubenswrapper[4717]: I0218 12:16:33.055895 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-cgb2w"] Feb 18 12:16:34 crc kubenswrapper[4717]: I0218 12:16:34.036910 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-wbnxr"] Feb 18 12:16:34 crc kubenswrapper[4717]: I0218 12:16:34.049999 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2c96-account-create-update-9pnt2"] Feb 18 12:16:34 crc kubenswrapper[4717]: I0218 12:16:34.063732 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8kcn6"] Feb 18 12:16:34 crc kubenswrapper[4717]: I0218 12:16:34.100576 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-wbnxr"] Feb 18 12:16:34 crc kubenswrapper[4717]: I0218 12:16:34.113517 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2c96-account-create-update-9pnt2"] Feb 18 12:16:34 crc kubenswrapper[4717]: I0218 12:16:34.125343 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8kcn6"] Feb 18 12:16:35 crc kubenswrapper[4717]: I0218 12:16:35.048597 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15f9b780-b9ee-4fa6-bf61-c2b5ec92a314" path="/var/lib/kubelet/pods/15f9b780-b9ee-4fa6-bf61-c2b5ec92a314/volumes" Feb 18 12:16:35 crc kubenswrapper[4717]: I0218 12:16:35.049238 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b10c7d3-ae5b-4a8a-bba0-a696225d8879" path="/var/lib/kubelet/pods/8b10c7d3-ae5b-4a8a-bba0-a696225d8879/volumes" Feb 18 12:16:35 crc kubenswrapper[4717]: I0218 12:16:35.049929 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc3dee21-58fd-4508-9549-30ef7d1e145b" path="/var/lib/kubelet/pods/dc3dee21-58fd-4508-9549-30ef7d1e145b/volumes" Feb 18 12:16:35 crc kubenswrapper[4717]: I0218 12:16:35.050683 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="e432a478-a2c5-40bf-adc3-306647151c92" path="/var/lib/kubelet/pods/e432a478-a2c5-40bf-adc3-306647151c92/volumes" Feb 18 12:16:37 crc kubenswrapper[4717]: I0218 12:16:37.053462 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9f34-account-create-update-rlr2f"] Feb 18 12:16:37 crc kubenswrapper[4717]: I0218 12:16:37.054411 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-09da-account-create-update-wzt88"] Feb 18 12:16:37 crc kubenswrapper[4717]: I0218 12:16:37.064224 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-09da-account-create-update-wzt88"] Feb 18 12:16:37 crc kubenswrapper[4717]: I0218 12:16:37.073550 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9f34-account-create-update-rlr2f"] Feb 18 12:16:38 crc kubenswrapper[4717]: I0218 12:16:38.695311 4717 scope.go:117] "RemoveContainer" containerID="945977b1eedcede8e7d09645421061525e3a3cf47ee24b39e1c2768a21281380" Feb 18 12:16:38 crc kubenswrapper[4717]: I0218 12:16:38.725049 4717 scope.go:117] "RemoveContainer" containerID="7c7d0e94f0915655531213795b3f4b9361142c6608e4ce263861d7911504a919" Feb 18 12:16:38 crc kubenswrapper[4717]: I0218 12:16:38.792083 4717 scope.go:117] "RemoveContainer" containerID="ef6cf167fb2f456d8de1104da4cdd2fef604b01687499a166c42ad4a1a0e5593" Feb 18 12:16:38 crc kubenswrapper[4717]: I0218 12:16:38.848702 4717 scope.go:117] "RemoveContainer" containerID="f1259448d70e33739c061c221dfda715e99555b5eb48e3ccd923f3bd07018947" Feb 18 12:16:38 crc kubenswrapper[4717]: I0218 12:16:38.910714 4717 scope.go:117] "RemoveContainer" containerID="1c9a726739db9e4d6bba234f379808e8e67d3ff8f1e31c822a7ec7bc908235b6" Feb 18 12:16:38 crc kubenswrapper[4717]: I0218 12:16:38.949739 4717 scope.go:117] "RemoveContainer" containerID="dc96ce52a90690e2c3c36c5ef50245f2d11b62c73b9e59b4cb5ed32a99ffa2ad" Feb 18 12:16:38 crc kubenswrapper[4717]: I0218 12:16:38.998883 4717 scope.go:117] 
"RemoveContainer" containerID="5e344eba368bc120816d5b921d30ee61f3bc118a59b00e452ec03584034ecc80" Feb 18 12:16:39 crc kubenswrapper[4717]: I0218 12:16:39.024588 4717 scope.go:117] "RemoveContainer" containerID="92af7a2e788f5ebded441346b2f5ca1d8adecc8c4d19d95e27509c0d2c3e001c" Feb 18 12:16:39 crc kubenswrapper[4717]: I0218 12:16:39.047485 4717 scope.go:117] "RemoveContainer" containerID="09a8c65290545784997245bdc5cec4e001ff5c4cf0c72928463bf354a034525d" Feb 18 12:16:39 crc kubenswrapper[4717]: I0218 12:16:39.054331 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf6f923-4c7b-4ba8-8545-1fb97ad5551b" path="/var/lib/kubelet/pods/1bf6f923-4c7b-4ba8-8545-1fb97ad5551b/volumes" Feb 18 12:16:39 crc kubenswrapper[4717]: I0218 12:16:39.055092 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd279242-4b9b-4172-a147-f69d3d74117b" path="/var/lib/kubelet/pods/dd279242-4b9b-4172-a147-f69d3d74117b/volumes" Feb 18 12:16:39 crc kubenswrapper[4717]: I0218 12:16:39.091010 4717 scope.go:117] "RemoveContainer" containerID="aaa840f431bc69c452df70f611906d35f757eacbd3c1a0064370de3f7de17158" Feb 18 12:16:39 crc kubenswrapper[4717]: I0218 12:16:39.114106 4717 scope.go:117] "RemoveContainer" containerID="7232abb79b1dbe6b75784cf0943b334996fe7dfd0fefc377baa662a9954d518a" Feb 18 12:16:39 crc kubenswrapper[4717]: I0218 12:16:39.142634 4717 scope.go:117] "RemoveContainer" containerID="135bf50b3e5d96b318dfaa92d07cc1dcdcca43ed306a5e248818cc6cd639b069" Feb 18 12:16:39 crc kubenswrapper[4717]: I0218 12:16:39.170106 4717 scope.go:117] "RemoveContainer" containerID="208f42341cb01cadd43ec8568207272847a6240d7fafb715ea6eb7d0ac51f16f" Feb 18 12:16:39 crc kubenswrapper[4717]: I0218 12:16:39.193366 4717 scope.go:117] "RemoveContainer" containerID="6376b2852ffa9a146954f41dfedd329fe54eda6e28a1dc1fe40261d14071c0c2" Feb 18 12:16:42 crc kubenswrapper[4717]: I0218 12:16:42.773881 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:16:42 crc kubenswrapper[4717]: I0218 12:16:42.774714 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:16:43 crc kubenswrapper[4717]: I0218 12:16:43.049301 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-dz2k8"] Feb 18 12:16:43 crc kubenswrapper[4717]: I0218 12:16:43.051939 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-dz2k8"] Feb 18 12:16:45 crc kubenswrapper[4717]: I0218 12:16:45.048627 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="546ddaab-7675-4c53-b86d-7f017d828784" path="/var/lib/kubelet/pods/546ddaab-7675-4c53-b86d-7f017d828784/volumes" Feb 18 12:17:12 crc kubenswrapper[4717]: I0218 12:17:12.772952 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:17:12 crc kubenswrapper[4717]: I0218 12:17:12.773784 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:17:39 crc kubenswrapper[4717]: I0218 12:17:39.499079 4717 
scope.go:117] "RemoveContainer" containerID="ccc4d36e6632542ff1eed8016b104d40419a04f9867edf7eed407ed41e49996c" Feb 18 12:17:39 crc kubenswrapper[4717]: I0218 12:17:39.533516 4717 scope.go:117] "RemoveContainer" containerID="00e95ce67a29c6b125fd07158f0744e63668e4aa36170ff6825e76022bf4d4ca" Feb 18 12:17:39 crc kubenswrapper[4717]: I0218 12:17:39.590100 4717 scope.go:117] "RemoveContainer" containerID="9c081280555879ab4ac0d4fe8485f00a2df10b33f5f4a014358ab02b130859cd" Feb 18 12:17:40 crc kubenswrapper[4717]: I0218 12:17:40.097347 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-wbtkn"] Feb 18 12:17:40 crc kubenswrapper[4717]: I0218 12:17:40.133174 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-mlr26"] Feb 18 12:17:40 crc kubenswrapper[4717]: I0218 12:17:40.167691 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-wbtkn"] Feb 18 12:17:40 crc kubenswrapper[4717]: I0218 12:17:40.185368 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4q9xn"] Feb 18 12:17:40 crc kubenswrapper[4717]: I0218 12:17:40.217363 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-mlr26"] Feb 18 12:17:40 crc kubenswrapper[4717]: I0218 12:17:40.245216 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4q9xn"] Feb 18 12:17:41 crc kubenswrapper[4717]: I0218 12:17:41.052236 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="608ad4dc-1e71-408a-9aea-015949cf9aff" path="/var/lib/kubelet/pods/608ad4dc-1e71-408a-9aea-015949cf9aff/volumes" Feb 18 12:17:41 crc kubenswrapper[4717]: I0218 12:17:41.053281 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc0bc5b3-673e-46dc-941a-151096e1831b" path="/var/lib/kubelet/pods/cc0bc5b3-673e-46dc-941a-151096e1831b/volumes" Feb 18 12:17:41 crc kubenswrapper[4717]: I0218 12:17:41.053868 4717 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2729c19-de90-453d-b744-10b50c11a28b" path="/var/lib/kubelet/pods/e2729c19-de90-453d-b744-10b50c11a28b/volumes" Feb 18 12:17:42 crc kubenswrapper[4717]: I0218 12:17:42.772978 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:17:42 crc kubenswrapper[4717]: I0218 12:17:42.773468 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:17:42 crc kubenswrapper[4717]: I0218 12:17:42.773534 4717 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 12:17:42 crc kubenswrapper[4717]: I0218 12:17:42.774430 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c"} pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:17:42 crc kubenswrapper[4717]: I0218 12:17:42.774489 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" containerID="cri-o://53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" gracePeriod=600 Feb 18 12:17:42 crc 
kubenswrapper[4717]: E0218 12:17:42.895643 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:17:42 crc kubenswrapper[4717]: I0218 12:17:42.915366 4717 generic.go:334] "Generic (PLEG): container finished" podID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" exitCode=0 Feb 18 12:17:42 crc kubenswrapper[4717]: I0218 12:17:42.915455 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerDied","Data":"53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c"} Feb 18 12:17:42 crc kubenswrapper[4717]: I0218 12:17:42.915504 4717 scope.go:117] "RemoveContainer" containerID="8029dd99738cdaa1b1753862d62d575af02645633ab25bd8d0586f5ebdffe632" Feb 18 12:17:42 crc kubenswrapper[4717]: I0218 12:17:42.916532 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:17:42 crc kubenswrapper[4717]: E0218 12:17:42.916918 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:17:47 crc kubenswrapper[4717]: I0218 12:17:47.969098 4717 generic.go:334] 
"Generic (PLEG): container finished" podID="57c4f818-2860-43d9-9c1b-f99b48449af0" containerID="22ceae2f5e4e134b47c04ef42abd48d26dc58cda2febe629ffb2bc08ca65806c" exitCode=0 Feb 18 12:17:47 crc kubenswrapper[4717]: I0218 12:17:47.969842 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" event={"ID":"57c4f818-2860-43d9-9c1b-f99b48449af0","Type":"ContainerDied","Data":"22ceae2f5e4e134b47c04ef42abd48d26dc58cda2febe629ffb2bc08ca65806c"} Feb 18 12:17:48 crc kubenswrapper[4717]: I0218 12:17:48.033940 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-77mb9"] Feb 18 12:17:48 crc kubenswrapper[4717]: I0218 12:17:48.049950 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-77mb9"] Feb 18 12:17:49 crc kubenswrapper[4717]: I0218 12:17:49.052767 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7893e53-3622-4455-8eb8-459235541b6a" path="/var/lib/kubelet/pods/c7893e53-3622-4455-8eb8-459235541b6a/volumes" Feb 18 12:17:49 crc kubenswrapper[4717]: I0218 12:17:49.437578 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" Feb 18 12:17:49 crc kubenswrapper[4717]: I0218 12:17:49.546298 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57c4f818-2860-43d9-9c1b-f99b48449af0-ssh-key-openstack-edpm-ipam\") pod \"57c4f818-2860-43d9-9c1b-f99b48449af0\" (UID: \"57c4f818-2860-43d9-9c1b-f99b48449af0\") " Feb 18 12:17:49 crc kubenswrapper[4717]: I0218 12:17:49.546659 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57c4f818-2860-43d9-9c1b-f99b48449af0-inventory\") pod \"57c4f818-2860-43d9-9c1b-f99b48449af0\" (UID: \"57c4f818-2860-43d9-9c1b-f99b48449af0\") " Feb 18 12:17:49 crc kubenswrapper[4717]: I0218 12:17:49.546706 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh6js\" (UniqueName: \"kubernetes.io/projected/57c4f818-2860-43d9-9c1b-f99b48449af0-kube-api-access-mh6js\") pod \"57c4f818-2860-43d9-9c1b-f99b48449af0\" (UID: \"57c4f818-2860-43d9-9c1b-f99b48449af0\") " Feb 18 12:17:49 crc kubenswrapper[4717]: I0218 12:17:49.553651 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57c4f818-2860-43d9-9c1b-f99b48449af0-kube-api-access-mh6js" (OuterVolumeSpecName: "kube-api-access-mh6js") pod "57c4f818-2860-43d9-9c1b-f99b48449af0" (UID: "57c4f818-2860-43d9-9c1b-f99b48449af0"). InnerVolumeSpecName "kube-api-access-mh6js". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:17:49 crc kubenswrapper[4717]: I0218 12:17:49.581104 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c4f818-2860-43d9-9c1b-f99b48449af0-inventory" (OuterVolumeSpecName: "inventory") pod "57c4f818-2860-43d9-9c1b-f99b48449af0" (UID: "57c4f818-2860-43d9-9c1b-f99b48449af0"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:17:49 crc kubenswrapper[4717]: I0218 12:17:49.591098 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c4f818-2860-43d9-9c1b-f99b48449af0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "57c4f818-2860-43d9-9c1b-f99b48449af0" (UID: "57c4f818-2860-43d9-9c1b-f99b48449af0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:17:49 crc kubenswrapper[4717]: I0218 12:17:49.648940 4717 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57c4f818-2860-43d9-9c1b-f99b48449af0-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:17:49 crc kubenswrapper[4717]: I0218 12:17:49.648980 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh6js\" (UniqueName: \"kubernetes.io/projected/57c4f818-2860-43d9-9c1b-f99b48449af0-kube-api-access-mh6js\") on node \"crc\" DevicePath \"\"" Feb 18 12:17:49 crc kubenswrapper[4717]: I0218 12:17:49.648992 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57c4f818-2860-43d9-9c1b-f99b48449af0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:17:49 crc kubenswrapper[4717]: I0218 12:17:49.991688 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" event={"ID":"57c4f818-2860-43d9-9c1b-f99b48449af0","Type":"ContainerDied","Data":"c3e734451934b74d7a04a96810bf3f374b0c66c31bfeef4bda86c8a79df3baff"} Feb 18 12:17:49 crc kubenswrapper[4717]: I0218 12:17:49.991748 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3e734451934b74d7a04a96810bf3f374b0c66c31bfeef4bda86c8a79df3baff" Feb 18 12:17:49 crc kubenswrapper[4717]: I0218 
12:17:49.991772 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dfh57" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.088562 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk"] Feb 18 12:17:50 crc kubenswrapper[4717]: E0218 12:17:50.089087 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f49e8658-a665-4f18-b28e-e7b4e8dff1a4" containerName="registry-server" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.089103 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="f49e8658-a665-4f18-b28e-e7b4e8dff1a4" containerName="registry-server" Feb 18 12:17:50 crc kubenswrapper[4717]: E0218 12:17:50.089137 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f49e8658-a665-4f18-b28e-e7b4e8dff1a4" containerName="extract-utilities" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.089159 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="f49e8658-a665-4f18-b28e-e7b4e8dff1a4" containerName="extract-utilities" Feb 18 12:17:50 crc kubenswrapper[4717]: E0218 12:17:50.089172 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f49e8658-a665-4f18-b28e-e7b4e8dff1a4" containerName="extract-content" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.089179 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="f49e8658-a665-4f18-b28e-e7b4e8dff1a4" containerName="extract-content" Feb 18 12:17:50 crc kubenswrapper[4717]: E0218 12:17:50.089189 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c4f818-2860-43d9-9c1b-f99b48449af0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.089199 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c4f818-2860-43d9-9c1b-f99b48449af0" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.089442 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="f49e8658-a665-4f18-b28e-e7b4e8dff1a4" containerName="registry-server" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.089475 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="57c4f818-2860-43d9-9c1b-f99b48449af0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.093245 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.097195 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.097470 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.097635 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.097816 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.106959 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk"] Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.159805 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t79d7\" (UniqueName: \"kubernetes.io/projected/16dc15ac-c4c3-4d90-8fd2-20054f92b894-kube-api-access-t79d7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk\" 
(UID: \"16dc15ac-c4c3-4d90-8fd2-20054f92b894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.160006 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16dc15ac-c4c3-4d90-8fd2-20054f92b894-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk\" (UID: \"16dc15ac-c4c3-4d90-8fd2-20054f92b894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.160078 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16dc15ac-c4c3-4d90-8fd2-20054f92b894-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk\" (UID: \"16dc15ac-c4c3-4d90-8fd2-20054f92b894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.262317 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16dc15ac-c4c3-4d90-8fd2-20054f92b894-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk\" (UID: \"16dc15ac-c4c3-4d90-8fd2-20054f92b894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.262424 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16dc15ac-c4c3-4d90-8fd2-20054f92b894-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk\" (UID: \"16dc15ac-c4c3-4d90-8fd2-20054f92b894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" Feb 18 
12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.262518 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t79d7\" (UniqueName: \"kubernetes.io/projected/16dc15ac-c4c3-4d90-8fd2-20054f92b894-kube-api-access-t79d7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk\" (UID: \"16dc15ac-c4c3-4d90-8fd2-20054f92b894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.267344 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16dc15ac-c4c3-4d90-8fd2-20054f92b894-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk\" (UID: \"16dc15ac-c4c3-4d90-8fd2-20054f92b894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.272045 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16dc15ac-c4c3-4d90-8fd2-20054f92b894-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk\" (UID: \"16dc15ac-c4c3-4d90-8fd2-20054f92b894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.280242 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t79d7\" (UniqueName: \"kubernetes.io/projected/16dc15ac-c4c3-4d90-8fd2-20054f92b894-kube-api-access-t79d7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk\" (UID: \"16dc15ac-c4c3-4d90-8fd2-20054f92b894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" Feb 18 12:17:50 crc kubenswrapper[4717]: I0218 12:17:50.419334 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" Feb 18 12:17:51 crc kubenswrapper[4717]: I0218 12:17:51.027763 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk"] Feb 18 12:17:52 crc kubenswrapper[4717]: I0218 12:17:52.012529 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" event={"ID":"16dc15ac-c4c3-4d90-8fd2-20054f92b894","Type":"ContainerStarted","Data":"4bfcbb5957001b8de25a8a87024826f05d3a819eb368ff3e137f1c2f3170e32e"} Feb 18 12:17:54 crc kubenswrapper[4717]: I0218 12:17:54.042129 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" event={"ID":"16dc15ac-c4c3-4d90-8fd2-20054f92b894","Type":"ContainerStarted","Data":"1acd865a29898262c7fc13630180a71c4c0939f912cbafc4153440d79a5fc288"} Feb 18 12:17:54 crc kubenswrapper[4717]: I0218 12:17:54.063519 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" podStartSLOduration=2.17883559 podStartE2EDuration="4.063493002s" podCreationTimestamp="2026-02-18 12:17:50 +0000 UTC" firstStartedPulling="2026-02-18 12:17:51.037777153 +0000 UTC m=+1705.439878459" lastFinishedPulling="2026-02-18 12:17:52.922434555 +0000 UTC m=+1707.324535871" observedRunningTime="2026-02-18 12:17:54.062594196 +0000 UTC m=+1708.464695512" watchObservedRunningTime="2026-02-18 12:17:54.063493002 +0000 UTC m=+1708.465594318" Feb 18 12:17:55 crc kubenswrapper[4717]: I0218 12:17:55.036920 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:17:55 crc kubenswrapper[4717]: E0218 12:17:55.037286 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:17:58 crc kubenswrapper[4717]: I0218 12:17:58.048884 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-pdj8h"] Feb 18 12:17:58 crc kubenswrapper[4717]: I0218 12:17:58.058475 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-pdj8h"] Feb 18 12:17:59 crc kubenswrapper[4717]: I0218 12:17:59.059090 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70292f47-9494-42eb-a5ae-041c4bfc01ea" path="/var/lib/kubelet/pods/70292f47-9494-42eb-a5ae-041c4bfc01ea/volumes" Feb 18 12:18:09 crc kubenswrapper[4717]: I0218 12:18:09.037418 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:18:09 crc kubenswrapper[4717]: E0218 12:18:09.038583 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:18:23 crc kubenswrapper[4717]: I0218 12:18:23.037023 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:18:23 crc kubenswrapper[4717]: E0218 12:18:23.038465 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:18:37 crc kubenswrapper[4717]: I0218 12:18:37.043520 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:18:37 crc kubenswrapper[4717]: E0218 12:18:37.045131 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:18:39 crc kubenswrapper[4717]: I0218 12:18:39.732744 4717 scope.go:117] "RemoveContainer" containerID="adbb7e4d109a273f3bcf5a09cb092a07f155eb6e064a9c10d3c2becb1ad5f41d" Feb 18 12:18:39 crc kubenswrapper[4717]: I0218 12:18:39.784781 4717 scope.go:117] "RemoveContainer" containerID="07d30e4c9eded45add94f59d8c81ef3846f9a2962ab1a8a899fd7e811abb205d" Feb 18 12:18:39 crc kubenswrapper[4717]: I0218 12:18:39.840671 4717 scope.go:117] "RemoveContainer" containerID="a254e6c74bfffe4657268329b67107f0c65112ce21409e3a6599de37ad3c0fd6" Feb 18 12:18:39 crc kubenswrapper[4717]: I0218 12:18:39.892775 4717 scope.go:117] "RemoveContainer" containerID="b6d361baaeb1cb58caf866b6899049d95c3e7d9c5fdd95f27715ef2197ee66db" Feb 18 12:18:39 crc kubenswrapper[4717]: I0218 12:18:39.942234 4717 scope.go:117] "RemoveContainer" containerID="e154dd36b950ad82ade8436531d32dafe482b05fb9aa61ed908a352e13eae2ac" Feb 18 12:18:43 crc kubenswrapper[4717]: I0218 12:18:43.067410 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6jnqg"] Feb 18 12:18:43 
crc kubenswrapper[4717]: I0218 12:18:43.080018 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-j4pk2"] Feb 18 12:18:43 crc kubenswrapper[4717]: I0218 12:18:43.091373 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-z5gs7"] Feb 18 12:18:43 crc kubenswrapper[4717]: I0218 12:18:43.101733 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6jnqg"] Feb 18 12:18:43 crc kubenswrapper[4717]: I0218 12:18:43.110524 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-z5gs7"] Feb 18 12:18:43 crc kubenswrapper[4717]: I0218 12:18:43.119939 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-j4pk2"] Feb 18 12:18:44 crc kubenswrapper[4717]: I0218 12:18:44.033523 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0466-account-create-update-m65cg"] Feb 18 12:18:44 crc kubenswrapper[4717]: I0218 12:18:44.053042 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7188-account-create-update-vmgn4"] Feb 18 12:18:44 crc kubenswrapper[4717]: I0218 12:18:44.066208 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3290-account-create-update-rg9fw"] Feb 18 12:18:44 crc kubenswrapper[4717]: I0218 12:18:44.077376 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7188-account-create-update-vmgn4"] Feb 18 12:18:44 crc kubenswrapper[4717]: I0218 12:18:44.088785 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0466-account-create-update-m65cg"] Feb 18 12:18:44 crc kubenswrapper[4717]: I0218 12:18:44.098352 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3290-account-create-update-rg9fw"] Feb 18 12:18:45 crc kubenswrapper[4717]: I0218 12:18:45.048994 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="0bb8ac16-ce77-4ce1-badf-2d4d610757f3" path="/var/lib/kubelet/pods/0bb8ac16-ce77-4ce1-badf-2d4d610757f3/volumes" Feb 18 12:18:45 crc kubenswrapper[4717]: I0218 12:18:45.050126 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a354e546-d49a-4825-a255-ce8888c40e42" path="/var/lib/kubelet/pods/a354e546-d49a-4825-a255-ce8888c40e42/volumes" Feb 18 12:18:45 crc kubenswrapper[4717]: I0218 12:18:45.050684 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3ff3a55-d178-47f2-9d17-069494943080" path="/var/lib/kubelet/pods/a3ff3a55-d178-47f2-9d17-069494943080/volumes" Feb 18 12:18:45 crc kubenswrapper[4717]: I0218 12:18:45.051334 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ada73f5c-ac32-44b6-9af8-dbc560004935" path="/var/lib/kubelet/pods/ada73f5c-ac32-44b6-9af8-dbc560004935/volumes" Feb 18 12:18:45 crc kubenswrapper[4717]: I0218 12:18:45.052418 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e12eeb3e-8069-404c-ab57-9a182bd555e4" path="/var/lib/kubelet/pods/e12eeb3e-8069-404c-ab57-9a182bd555e4/volumes" Feb 18 12:18:45 crc kubenswrapper[4717]: I0218 12:18:45.053483 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe18f0e8-9d43-49ba-afd5-854b7540e855" path="/var/lib/kubelet/pods/fe18f0e8-9d43-49ba-afd5-854b7540e855/volumes" Feb 18 12:18:49 crc kubenswrapper[4717]: I0218 12:18:49.037818 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:18:49 crc kubenswrapper[4717]: E0218 12:18:49.038627 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" 
podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:19:03 crc kubenswrapper[4717]: I0218 12:19:03.038023 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:19:03 crc kubenswrapper[4717]: E0218 12:19:03.041029 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:19:09 crc kubenswrapper[4717]: I0218 12:19:09.764397 4717 generic.go:334] "Generic (PLEG): container finished" podID="16dc15ac-c4c3-4d90-8fd2-20054f92b894" containerID="1acd865a29898262c7fc13630180a71c4c0939f912cbafc4153440d79a5fc288" exitCode=0 Feb 18 12:19:09 crc kubenswrapper[4717]: I0218 12:19:09.764506 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" event={"ID":"16dc15ac-c4c3-4d90-8fd2-20054f92b894","Type":"ContainerDied","Data":"1acd865a29898262c7fc13630180a71c4c0939f912cbafc4153440d79a5fc288"} Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.192878 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.295945 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16dc15ac-c4c3-4d90-8fd2-20054f92b894-inventory\") pod \"16dc15ac-c4c3-4d90-8fd2-20054f92b894\" (UID: \"16dc15ac-c4c3-4d90-8fd2-20054f92b894\") " Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.296152 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16dc15ac-c4c3-4d90-8fd2-20054f92b894-ssh-key-openstack-edpm-ipam\") pod \"16dc15ac-c4c3-4d90-8fd2-20054f92b894\" (UID: \"16dc15ac-c4c3-4d90-8fd2-20054f92b894\") " Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.296249 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t79d7\" (UniqueName: \"kubernetes.io/projected/16dc15ac-c4c3-4d90-8fd2-20054f92b894-kube-api-access-t79d7\") pod \"16dc15ac-c4c3-4d90-8fd2-20054f92b894\" (UID: \"16dc15ac-c4c3-4d90-8fd2-20054f92b894\") " Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.301852 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16dc15ac-c4c3-4d90-8fd2-20054f92b894-kube-api-access-t79d7" (OuterVolumeSpecName: "kube-api-access-t79d7") pod "16dc15ac-c4c3-4d90-8fd2-20054f92b894" (UID: "16dc15ac-c4c3-4d90-8fd2-20054f92b894"). InnerVolumeSpecName "kube-api-access-t79d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.326151 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16dc15ac-c4c3-4d90-8fd2-20054f92b894-inventory" (OuterVolumeSpecName: "inventory") pod "16dc15ac-c4c3-4d90-8fd2-20054f92b894" (UID: "16dc15ac-c4c3-4d90-8fd2-20054f92b894"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.327126 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16dc15ac-c4c3-4d90-8fd2-20054f92b894-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "16dc15ac-c4c3-4d90-8fd2-20054f92b894" (UID: "16dc15ac-c4c3-4d90-8fd2-20054f92b894"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.398753 4717 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16dc15ac-c4c3-4d90-8fd2-20054f92b894-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.398804 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16dc15ac-c4c3-4d90-8fd2-20054f92b894-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.398821 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t79d7\" (UniqueName: \"kubernetes.io/projected/16dc15ac-c4c3-4d90-8fd2-20054f92b894-kube-api-access-t79d7\") on node \"crc\" DevicePath \"\"" Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.831096 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" event={"ID":"16dc15ac-c4c3-4d90-8fd2-20054f92b894","Type":"ContainerDied","Data":"4bfcbb5957001b8de25a8a87024826f05d3a819eb368ff3e137f1c2f3170e32e"} Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.831158 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bfcbb5957001b8de25a8a87024826f05d3a819eb368ff3e137f1c2f3170e32e" Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 
12:19:11.831195 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk" Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.888436 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2"] Feb 18 12:19:11 crc kubenswrapper[4717]: E0218 12:19:11.889104 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16dc15ac-c4c3-4d90-8fd2-20054f92b894" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.889133 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="16dc15ac-c4c3-4d90-8fd2-20054f92b894" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.889453 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="16dc15ac-c4c3-4d90-8fd2-20054f92b894" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.890379 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2" Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.893957 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.894119 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx" Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.894141 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.897541 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2"] Feb 18 12:19:11 crc kubenswrapper[4717]: I0218 12:19:11.897947 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:19:12 crc kubenswrapper[4717]: I0218 12:19:12.015229 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm5w9\" (UniqueName: \"kubernetes.io/projected/2a2cdadf-7168-4073-ac4d-68893d4f61de-kube-api-access-lm5w9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5sft2\" (UID: \"2a2cdadf-7168-4073-ac4d-68893d4f61de\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2" Feb 18 12:19:12 crc kubenswrapper[4717]: I0218 12:19:12.015360 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a2cdadf-7168-4073-ac4d-68893d4f61de-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5sft2\" (UID: \"2a2cdadf-7168-4073-ac4d-68893d4f61de\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2" Feb 18 12:19:12 crc kubenswrapper[4717]: I0218 
12:19:12.015588 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a2cdadf-7168-4073-ac4d-68893d4f61de-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5sft2\" (UID: \"2a2cdadf-7168-4073-ac4d-68893d4f61de\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2" Feb 18 12:19:12 crc kubenswrapper[4717]: I0218 12:19:12.118188 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a2cdadf-7168-4073-ac4d-68893d4f61de-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5sft2\" (UID: \"2a2cdadf-7168-4073-ac4d-68893d4f61de\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2" Feb 18 12:19:12 crc kubenswrapper[4717]: I0218 12:19:12.118502 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm5w9\" (UniqueName: \"kubernetes.io/projected/2a2cdadf-7168-4073-ac4d-68893d4f61de-kube-api-access-lm5w9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5sft2\" (UID: \"2a2cdadf-7168-4073-ac4d-68893d4f61de\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2" Feb 18 12:19:12 crc kubenswrapper[4717]: I0218 12:19:12.118585 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a2cdadf-7168-4073-ac4d-68893d4f61de-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5sft2\" (UID: \"2a2cdadf-7168-4073-ac4d-68893d4f61de\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2" Feb 18 12:19:12 crc kubenswrapper[4717]: I0218 12:19:12.124550 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/2a2cdadf-7168-4073-ac4d-68893d4f61de-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5sft2\" (UID: \"2a2cdadf-7168-4073-ac4d-68893d4f61de\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2" Feb 18 12:19:12 crc kubenswrapper[4717]: I0218 12:19:12.125079 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a2cdadf-7168-4073-ac4d-68893d4f61de-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5sft2\" (UID: \"2a2cdadf-7168-4073-ac4d-68893d4f61de\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2" Feb 18 12:19:12 crc kubenswrapper[4717]: I0218 12:19:12.139855 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm5w9\" (UniqueName: \"kubernetes.io/projected/2a2cdadf-7168-4073-ac4d-68893d4f61de-kube-api-access-lm5w9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5sft2\" (UID: \"2a2cdadf-7168-4073-ac4d-68893d4f61de\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2" Feb 18 12:19:12 crc kubenswrapper[4717]: I0218 12:19:12.216344 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2"
Feb 18 12:19:12 crc kubenswrapper[4717]: I0218 12:19:12.787246 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2"]
Feb 18 12:19:12 crc kubenswrapper[4717]: I0218 12:19:12.844850 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2" event={"ID":"2a2cdadf-7168-4073-ac4d-68893d4f61de","Type":"ContainerStarted","Data":"99f5caadf170204da62fac57b93c4c79f8597db8f9299812a2f9e6ba002f33ad"}
Feb 18 12:19:13 crc kubenswrapper[4717]: I0218 12:19:13.856194 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2" event={"ID":"2a2cdadf-7168-4073-ac4d-68893d4f61de","Type":"ContainerStarted","Data":"9ff23a20d11c0ed7183820b86e14238c2d77d37e9fcae75d419d177eb08fae99"}
Feb 18 12:19:13 crc kubenswrapper[4717]: I0218 12:19:13.875457 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2" podStartSLOduration=2.272343579 podStartE2EDuration="2.875428174s" podCreationTimestamp="2026-02-18 12:19:11 +0000 UTC" firstStartedPulling="2026-02-18 12:19:12.817298359 +0000 UTC m=+1787.219399675" lastFinishedPulling="2026-02-18 12:19:13.420382954 +0000 UTC m=+1787.822484270" observedRunningTime="2026-02-18 12:19:13.875019092 +0000 UTC m=+1788.277120408" watchObservedRunningTime="2026-02-18 12:19:13.875428174 +0000 UTC m=+1788.277529490"
Feb 18 12:19:15 crc kubenswrapper[4717]: I0218 12:19:15.052821 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c"
Feb 18 12:19:15 crc kubenswrapper[4717]: E0218 12:19:15.053447 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3"
Feb 18 12:19:18 crc kubenswrapper[4717]: I0218 12:19:18.927360 4717 generic.go:334] "Generic (PLEG): container finished" podID="2a2cdadf-7168-4073-ac4d-68893d4f61de" containerID="9ff23a20d11c0ed7183820b86e14238c2d77d37e9fcae75d419d177eb08fae99" exitCode=0
Feb 18 12:19:18 crc kubenswrapper[4717]: I0218 12:19:18.927496 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2" event={"ID":"2a2cdadf-7168-4073-ac4d-68893d4f61de","Type":"ContainerDied","Data":"9ff23a20d11c0ed7183820b86e14238c2d77d37e9fcae75d419d177eb08fae99"}
Feb 18 12:19:20 crc kubenswrapper[4717]: I0218 12:19:20.357675 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2"
Feb 18 12:19:20 crc kubenswrapper[4717]: I0218 12:19:20.511190 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm5w9\" (UniqueName: \"kubernetes.io/projected/2a2cdadf-7168-4073-ac4d-68893d4f61de-kube-api-access-lm5w9\") pod \"2a2cdadf-7168-4073-ac4d-68893d4f61de\" (UID: \"2a2cdadf-7168-4073-ac4d-68893d4f61de\") "
Feb 18 12:19:20 crc kubenswrapper[4717]: I0218 12:19:20.511419 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a2cdadf-7168-4073-ac4d-68893d4f61de-inventory\") pod \"2a2cdadf-7168-4073-ac4d-68893d4f61de\" (UID: \"2a2cdadf-7168-4073-ac4d-68893d4f61de\") "
Feb 18 12:19:20 crc kubenswrapper[4717]: I0218 12:19:20.511618 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a2cdadf-7168-4073-ac4d-68893d4f61de-ssh-key-openstack-edpm-ipam\") pod \"2a2cdadf-7168-4073-ac4d-68893d4f61de\" (UID: \"2a2cdadf-7168-4073-ac4d-68893d4f61de\") "
Feb 18 12:19:20 crc kubenswrapper[4717]: I0218 12:19:20.518085 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a2cdadf-7168-4073-ac4d-68893d4f61de-kube-api-access-lm5w9" (OuterVolumeSpecName: "kube-api-access-lm5w9") pod "2a2cdadf-7168-4073-ac4d-68893d4f61de" (UID: "2a2cdadf-7168-4073-ac4d-68893d4f61de"). InnerVolumeSpecName "kube-api-access-lm5w9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 12:19:20 crc kubenswrapper[4717]: I0218 12:19:20.543739 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a2cdadf-7168-4073-ac4d-68893d4f61de-inventory" (OuterVolumeSpecName: "inventory") pod "2a2cdadf-7168-4073-ac4d-68893d4f61de" (UID: "2a2cdadf-7168-4073-ac4d-68893d4f61de"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 12:19:20 crc kubenswrapper[4717]: I0218 12:19:20.555142 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a2cdadf-7168-4073-ac4d-68893d4f61de-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2a2cdadf-7168-4073-ac4d-68893d4f61de" (UID: "2a2cdadf-7168-4073-ac4d-68893d4f61de"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 12:19:20 crc kubenswrapper[4717]: I0218 12:19:20.614763 4717 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a2cdadf-7168-4073-ac4d-68893d4f61de-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 12:19:20 crc kubenswrapper[4717]: I0218 12:19:20.614814 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a2cdadf-7168-4073-ac4d-68893d4f61de-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 12:19:20 crc kubenswrapper[4717]: I0218 12:19:20.614836 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm5w9\" (UniqueName: \"kubernetes.io/projected/2a2cdadf-7168-4073-ac4d-68893d4f61de-kube-api-access-lm5w9\") on node \"crc\" DevicePath \"\""
Feb 18 12:19:20 crc kubenswrapper[4717]: I0218 12:19:20.948784 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2" event={"ID":"2a2cdadf-7168-4073-ac4d-68893d4f61de","Type":"ContainerDied","Data":"99f5caadf170204da62fac57b93c4c79f8597db8f9299812a2f9e6ba002f33ad"}
Feb 18 12:19:20 crc kubenswrapper[4717]: I0218 12:19:20.948836 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99f5caadf170204da62fac57b93c4c79f8597db8f9299812a2f9e6ba002f33ad"
Feb 18 12:19:20 crc kubenswrapper[4717]: I0218 12:19:20.948909 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5sft2"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.049724 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kk96s"]
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.060342 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kk96s"]
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.088071 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq"]
Feb 18 12:19:21 crc kubenswrapper[4717]: E0218 12:19:21.088689 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a2cdadf-7168-4073-ac4d-68893d4f61de" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.088718 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a2cdadf-7168-4073-ac4d-68893d4f61de" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.088990 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a2cdadf-7168-4073-ac4d-68893d4f61de" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.089976 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.093003 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.093499 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.095085 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.100616 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.102066 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq"]
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.226332 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8dcqq\" (UID: \"a2da22c1-d4ae-46e5-919a-b69a2a8807e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.226399 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdclh\" (UniqueName: \"kubernetes.io/projected/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-kube-api-access-qdclh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8dcqq\" (UID: \"a2da22c1-d4ae-46e5-919a-b69a2a8807e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.226807 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8dcqq\" (UID: \"a2da22c1-d4ae-46e5-919a-b69a2a8807e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.329529 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8dcqq\" (UID: \"a2da22c1-d4ae-46e5-919a-b69a2a8807e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.329612 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdclh\" (UniqueName: \"kubernetes.io/projected/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-kube-api-access-qdclh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8dcqq\" (UID: \"a2da22c1-d4ae-46e5-919a-b69a2a8807e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.329736 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8dcqq\" (UID: \"a2da22c1-d4ae-46e5-919a-b69a2a8807e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.335711 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8dcqq\" (UID: \"a2da22c1-d4ae-46e5-919a-b69a2a8807e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.337038 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8dcqq\" (UID: \"a2da22c1-d4ae-46e5-919a-b69a2a8807e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.369201 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdclh\" (UniqueName: \"kubernetes.io/projected/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-kube-api-access-qdclh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8dcqq\" (UID: \"a2da22c1-d4ae-46e5-919a-b69a2a8807e1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.410649 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq"
Feb 18 12:19:21 crc kubenswrapper[4717]: I0218 12:19:21.966481 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq"]
Feb 18 12:19:21 crc kubenswrapper[4717]: W0218 12:19:21.977657 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2da22c1_d4ae_46e5_919a_b69a2a8807e1.slice/crio-9c982c0651984fec15f1980d05b764b28d1cb5a41141236d79abc5df2f9de103 WatchSource:0}: Error finding container 9c982c0651984fec15f1980d05b764b28d1cb5a41141236d79abc5df2f9de103: Status 404 returned error can't find the container with id 9c982c0651984fec15f1980d05b764b28d1cb5a41141236d79abc5df2f9de103
Feb 18 12:19:22 crc kubenswrapper[4717]: I0218 12:19:22.967209 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq" event={"ID":"a2da22c1-d4ae-46e5-919a-b69a2a8807e1","Type":"ContainerStarted","Data":"9c982c0651984fec15f1980d05b764b28d1cb5a41141236d79abc5df2f9de103"}
Feb 18 12:19:23 crc kubenswrapper[4717]: I0218 12:19:23.049699 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba2e1766-bfe0-4a06-bb70-833b33300ec4" path="/var/lib/kubelet/pods/ba2e1766-bfe0-4a06-bb70-833b33300ec4/volumes"
Feb 18 12:19:23 crc kubenswrapper[4717]: I0218 12:19:23.980878 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq" event={"ID":"a2da22c1-d4ae-46e5-919a-b69a2a8807e1","Type":"ContainerStarted","Data":"705b8c1691669a4425a15d079fa5e207ac4ad758d2f4e6dd51bcca45d4fed662"}
Feb 18 12:19:23 crc kubenswrapper[4717]: I0218 12:19:23.999206 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq" podStartSLOduration=1.745798075 podStartE2EDuration="2.999179515s" podCreationTimestamp="2026-02-18 12:19:21 +0000 UTC" firstStartedPulling="2026-02-18 12:19:21.981551759 +0000 UTC m=+1796.383653075" lastFinishedPulling="2026-02-18 12:19:23.234933199 +0000 UTC m=+1797.637034515" observedRunningTime="2026-02-18 12:19:23.998459864 +0000 UTC m=+1798.400561180" watchObservedRunningTime="2026-02-18 12:19:23.999179515 +0000 UTC m=+1798.401280841"
Feb 18 12:19:29 crc kubenswrapper[4717]: I0218 12:19:29.038377 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c"
Feb 18 12:19:29 crc kubenswrapper[4717]: E0218 12:19:29.039559 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3"
Feb 18 12:19:40 crc kubenswrapper[4717]: I0218 12:19:40.037326 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c"
Feb 18 12:19:40 crc kubenswrapper[4717]: E0218 12:19:40.038530 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3"
Feb 18 12:19:40 crc kubenswrapper[4717]: I0218 12:19:40.114648 4717 scope.go:117] "RemoveContainer" containerID="0ac44b0853d0b61a571326e4b6962a5424cf1f0221e07e34c5ef08d862e2d3ba"
Feb 18 12:19:40 crc kubenswrapper[4717]: I0218 12:19:40.156215 4717 scope.go:117] "RemoveContainer" containerID="b737f44033956b1ef5d694279c6bdc7ff4ae897d5b739497b5dde5f1166cca7c"
Feb 18 12:19:40 crc kubenswrapper[4717]: I0218 12:19:40.202113 4717 scope.go:117] "RemoveContainer" containerID="1cf9fbbd529a32a4ca3ab77705d6f29f37d26bf9aa75c488d20b9c5693ed36e2"
Feb 18 12:19:40 crc kubenswrapper[4717]: I0218 12:19:40.256287 4717 scope.go:117] "RemoveContainer" containerID="be01a91fc59414df545e3b7a7eb41d892e94a99b3d7dc4c5d68a74fdacf07184"
Feb 18 12:19:40 crc kubenswrapper[4717]: I0218 12:19:40.302918 4717 scope.go:117] "RemoveContainer" containerID="82bfe9414e2287220f17d75be09b6c40e8a4945cc2f2ddfd9dedaedac4c7c258"
Feb 18 12:19:40 crc kubenswrapper[4717]: I0218 12:19:40.376049 4717 scope.go:117] "RemoveContainer" containerID="8268418fc337d5566944009346998b60339d7fa249c2abf824b7d0733197da11"
Feb 18 12:19:40 crc kubenswrapper[4717]: I0218 12:19:40.401446 4717 scope.go:117] "RemoveContainer" containerID="a7ac6d63963063aa0411fb3ea96ea0b7e05f9efb8616fd62be45aed561c4b6e6"
Feb 18 12:19:44 crc kubenswrapper[4717]: I0218 12:19:44.031766 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8rrq7"]
Feb 18 12:19:44 crc kubenswrapper[4717]: I0218 12:19:44.042679 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8rrq7"]
Feb 18 12:19:45 crc kubenswrapper[4717]: I0218 12:19:45.050641 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91c00704-11f3-4b61-8964-98cd2f711987" path="/var/lib/kubelet/pods/91c00704-11f3-4b61-8964-98cd2f711987/volumes"
Feb 18 12:19:45 crc kubenswrapper[4717]: I0218 12:19:45.051566 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2lj58"]
Feb 18 12:19:45 crc kubenswrapper[4717]: I0218 12:19:45.051601 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2lj58"]
Feb 18 12:19:47 crc kubenswrapper[4717]: I0218 12:19:47.064242 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e776a05b-0cc9-43cc-9554-c534022da512" path="/var/lib/kubelet/pods/e776a05b-0cc9-43cc-9554-c534022da512/volumes"
Feb 18 12:19:55 crc kubenswrapper[4717]: I0218 12:19:55.037253 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c"
Feb 18 12:19:55 crc kubenswrapper[4717]: E0218 12:19:55.038582 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3"
Feb 18 12:20:01 crc kubenswrapper[4717]: I0218 12:20:01.381484 4717 generic.go:334] "Generic (PLEG): container finished" podID="a2da22c1-d4ae-46e5-919a-b69a2a8807e1" containerID="705b8c1691669a4425a15d079fa5e207ac4ad758d2f4e6dd51bcca45d4fed662" exitCode=0
Feb 18 12:20:01 crc kubenswrapper[4717]: I0218 12:20:01.381578 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq" event={"ID":"a2da22c1-d4ae-46e5-919a-b69a2a8807e1","Type":"ContainerDied","Data":"705b8c1691669a4425a15d079fa5e207ac4ad758d2f4e6dd51bcca45d4fed662"}
Feb 18 12:20:02 crc kubenswrapper[4717]: I0218 12:20:02.854571 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.006667 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-ssh-key-openstack-edpm-ipam\") pod \"a2da22c1-d4ae-46e5-919a-b69a2a8807e1\" (UID: \"a2da22c1-d4ae-46e5-919a-b69a2a8807e1\") "
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.006783 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-inventory\") pod \"a2da22c1-d4ae-46e5-919a-b69a2a8807e1\" (UID: \"a2da22c1-d4ae-46e5-919a-b69a2a8807e1\") "
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.006938 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdclh\" (UniqueName: \"kubernetes.io/projected/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-kube-api-access-qdclh\") pod \"a2da22c1-d4ae-46e5-919a-b69a2a8807e1\" (UID: \"a2da22c1-d4ae-46e5-919a-b69a2a8807e1\") "
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.036540 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-kube-api-access-qdclh" (OuterVolumeSpecName: "kube-api-access-qdclh") pod "a2da22c1-d4ae-46e5-919a-b69a2a8807e1" (UID: "a2da22c1-d4ae-46e5-919a-b69a2a8807e1"). InnerVolumeSpecName "kube-api-access-qdclh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.040030 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-inventory" (OuterVolumeSpecName: "inventory") pod "a2da22c1-d4ae-46e5-919a-b69a2a8807e1" (UID: "a2da22c1-d4ae-46e5-919a-b69a2a8807e1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.043051 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a2da22c1-d4ae-46e5-919a-b69a2a8807e1" (UID: "a2da22c1-d4ae-46e5-919a-b69a2a8807e1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.109807 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.109847 4717 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.109859 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdclh\" (UniqueName: \"kubernetes.io/projected/a2da22c1-d4ae-46e5-919a-b69a2a8807e1-kube-api-access-qdclh\") on node \"crc\" DevicePath \"\""
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.405049 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq" event={"ID":"a2da22c1-d4ae-46e5-919a-b69a2a8807e1","Type":"ContainerDied","Data":"9c982c0651984fec15f1980d05b764b28d1cb5a41141236d79abc5df2f9de103"}
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.405096 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c982c0651984fec15f1980d05b764b28d1cb5a41141236d79abc5df2f9de103"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.405134 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8dcqq"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.585662 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx"]
Feb 18 12:20:03 crc kubenswrapper[4717]: E0218 12:20:03.586311 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2da22c1-d4ae-46e5-919a-b69a2a8807e1" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.586346 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2da22c1-d4ae-46e5-919a-b69a2a8807e1" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.586630 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2da22c1-d4ae-46e5-919a-b69a2a8807e1" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.587626 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.590528 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.590537 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.590664 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.592575 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.596895 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx"]
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.723085 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv6nq\" (UniqueName: \"kubernetes.io/projected/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-kube-api-access-rv6nq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx\" (UID: \"38e6fde5-49a9-46e9-bb5d-382aaf00efa7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.723276 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx\" (UID: \"38e6fde5-49a9-46e9-bb5d-382aaf00efa7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.723375 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx\" (UID: \"38e6fde5-49a9-46e9-bb5d-382aaf00efa7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.825336 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv6nq\" (UniqueName: \"kubernetes.io/projected/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-kube-api-access-rv6nq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx\" (UID: \"38e6fde5-49a9-46e9-bb5d-382aaf00efa7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.825450 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx\" (UID: \"38e6fde5-49a9-46e9-bb5d-382aaf00efa7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.825531 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx\" (UID: \"38e6fde5-49a9-46e9-bb5d-382aaf00efa7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.831398 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx\" (UID: \"38e6fde5-49a9-46e9-bb5d-382aaf00efa7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.832100 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx\" (UID: \"38e6fde5-49a9-46e9-bb5d-382aaf00efa7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.845707 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv6nq\" (UniqueName: \"kubernetes.io/projected/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-kube-api-access-rv6nq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx\" (UID: \"38e6fde5-49a9-46e9-bb5d-382aaf00efa7\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx"
Feb 18 12:20:03 crc kubenswrapper[4717]: I0218 12:20:03.907224 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx"
Feb 18 12:20:04 crc kubenswrapper[4717]: I0218 12:20:04.469109 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx"]
Feb 18 12:20:05 crc kubenswrapper[4717]: I0218 12:20:05.433306 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx" event={"ID":"38e6fde5-49a9-46e9-bb5d-382aaf00efa7","Type":"ContainerStarted","Data":"d38d5d502d3a02c8c3091a8966837af294d4b35ad5e9b1903665dd0f35ad095f"}
Feb 18 12:20:05 crc kubenswrapper[4717]: I0218 12:20:05.433637 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx" event={"ID":"38e6fde5-49a9-46e9-bb5d-382aaf00efa7","Type":"ContainerStarted","Data":"be83ba43576f27cd2fcd95a2754e9dfe33feee97ef95ed0eaeec2514d228ee1b"}
Feb 18 12:20:05 crc kubenswrapper[4717]: I0218 12:20:05.458201 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx" podStartSLOduration=2.044924725 podStartE2EDuration="2.458182099s" podCreationTimestamp="2026-02-18 12:20:03 +0000 UTC" firstStartedPulling="2026-02-18 12:20:04.476092501 +0000 UTC m=+1838.878193817" lastFinishedPulling="2026-02-18 12:20:04.889349875 +0000 UTC m=+1839.291451191" observedRunningTime="2026-02-18 12:20:05.458010004 +0000 UTC m=+1839.860111320" watchObservedRunningTime="2026-02-18 12:20:05.458182099 +0000 UTC m=+1839.860283415"
Feb 18 12:20:08 crc kubenswrapper[4717]: I0218 12:20:08.037064 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c"
Feb 18 12:20:08 crc kubenswrapper[4717]: E0218 12:20:08.037785 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3"
Feb 18 12:20:20 crc kubenswrapper[4717]: I0218 12:20:20.036973 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c"
Feb 18 12:20:20 crc kubenswrapper[4717]: E0218 12:20:20.037900 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3"
Feb 18 12:20:30 crc kubenswrapper[4717]: I0218 12:20:30.046416 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-vksh9"]
Feb 18 12:20:30 crc kubenswrapper[4717]: I0218 12:20:30.059375 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-vksh9"]
Feb 18 12:20:31 crc kubenswrapper[4717]: I0218 12:20:31.053166 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e0b59b0-6221-4d9e-b59e-99b82ab9dd27" path="/var/lib/kubelet/pods/6e0b59b0-6221-4d9e-b59e-99b82ab9dd27/volumes"
Feb 18 12:20:33 crc kubenswrapper[4717]: I0218 12:20:33.039447 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c"
Feb 18 12:20:33 crc kubenswrapper[4717]: E0218 12:20:33.040267 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3"
Feb 18 12:20:40 crc kubenswrapper[4717]: I0218 12:20:40.549971 4717 scope.go:117] "RemoveContainer" containerID="3a420f359ada50e6643af8858c6f43293f9ae71afd4f7ed49900185d3f6bb72c"
Feb 18 12:20:40 crc kubenswrapper[4717]: I0218 12:20:40.620605 4717 scope.go:117] "RemoveContainer" containerID="66ecaaf20cf9f194565a1e8f7b3f4e9546da4a91c020cdf73195dbccdd17c4b6"
Feb 18 12:20:40 crc kubenswrapper[4717]: I0218 12:20:40.664490 4717 scope.go:117] "RemoveContainer" containerID="36097929a2be8c633219df591fd6edefe007773696afcea09c375684061e58e7"
Feb 18 12:20:46 crc kubenswrapper[4717]: I0218 12:20:46.036921 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c"
Feb 18 12:20:46 crc kubenswrapper[4717]: E0218 12:20:46.037891 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3"
Feb 18 12:20:49 crc kubenswrapper[4717]: I0218 12:20:49.870528 4717 generic.go:334] "Generic (PLEG): container finished" podID="38e6fde5-49a9-46e9-bb5d-382aaf00efa7" containerID="d38d5d502d3a02c8c3091a8966837af294d4b35ad5e9b1903665dd0f35ad095f" exitCode=0
Feb 18 12:20:49 crc kubenswrapper[4717]: I0218 12:20:49.870638 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx" event={"ID":"38e6fde5-49a9-46e9-bb5d-382aaf00efa7","Type":"ContainerDied","Data":"d38d5d502d3a02c8c3091a8966837af294d4b35ad5e9b1903665dd0f35ad095f"}
Feb 18 12:20:51 crc kubenswrapper[4717]: I0218 12:20:51.273011 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx"
Feb 18 12:20:51 crc kubenswrapper[4717]: I0218 12:20:51.431488 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-inventory\") pod \"38e6fde5-49a9-46e9-bb5d-382aaf00efa7\" (UID: \"38e6fde5-49a9-46e9-bb5d-382aaf00efa7\") "
Feb 18 12:20:51 crc kubenswrapper[4717]: I0218 12:20:51.431728 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv6nq\" (UniqueName: \"kubernetes.io/projected/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-kube-api-access-rv6nq\") pod \"38e6fde5-49a9-46e9-bb5d-382aaf00efa7\" (UID: \"38e6fde5-49a9-46e9-bb5d-382aaf00efa7\") "
Feb 18 12:20:51 crc kubenswrapper[4717]: I0218 12:20:51.431793 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-ssh-key-openstack-edpm-ipam\") pod \"38e6fde5-49a9-46e9-bb5d-382aaf00efa7\" (UID: \"38e6fde5-49a9-46e9-bb5d-382aaf00efa7\") "
Feb 18 12:20:51 crc kubenswrapper[4717]: I0218 12:20:51.437820 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-kube-api-access-rv6nq" (OuterVolumeSpecName: "kube-api-access-rv6nq") pod "38e6fde5-49a9-46e9-bb5d-382aaf00efa7" (UID: "38e6fde5-49a9-46e9-bb5d-382aaf00efa7"). InnerVolumeSpecName "kube-api-access-rv6nq".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:20:51 crc kubenswrapper[4717]: I0218 12:20:51.463666 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "38e6fde5-49a9-46e9-bb5d-382aaf00efa7" (UID: "38e6fde5-49a9-46e9-bb5d-382aaf00efa7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:20:51 crc kubenswrapper[4717]: I0218 12:20:51.465488 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-inventory" (OuterVolumeSpecName: "inventory") pod "38e6fde5-49a9-46e9-bb5d-382aaf00efa7" (UID: "38e6fde5-49a9-46e9-bb5d-382aaf00efa7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:20:51 crc kubenswrapper[4717]: I0218 12:20:51.534809 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv6nq\" (UniqueName: \"kubernetes.io/projected/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-kube-api-access-rv6nq\") on node \"crc\" DevicePath \"\"" Feb 18 12:20:51 crc kubenswrapper[4717]: I0218 12:20:51.534883 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:20:51 crc kubenswrapper[4717]: I0218 12:20:51.534900 4717 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38e6fde5-49a9-46e9-bb5d-382aaf00efa7-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:20:51 crc kubenswrapper[4717]: I0218 12:20:51.892709 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx" 
event={"ID":"38e6fde5-49a9-46e9-bb5d-382aaf00efa7","Type":"ContainerDied","Data":"be83ba43576f27cd2fcd95a2754e9dfe33feee97ef95ed0eaeec2514d228ee1b"} Feb 18 12:20:51 crc kubenswrapper[4717]: I0218 12:20:51.892767 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be83ba43576f27cd2fcd95a2754e9dfe33feee97ef95ed0eaeec2514d228ee1b" Feb 18 12:20:51 crc kubenswrapper[4717]: I0218 12:20:51.892866 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.091207 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-98rv7"] Feb 18 12:20:52 crc kubenswrapper[4717]: E0218 12:20:52.092022 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e6fde5-49a9-46e9-bb5d-382aaf00efa7" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.092049 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e6fde5-49a9-46e9-bb5d-382aaf00efa7" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.092327 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e6fde5-49a9-46e9-bb5d-382aaf00efa7" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.093154 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.096187 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.097811 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.098149 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.099697 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.107860 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-98rv7"] Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.247738 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vvtp\" (UniqueName: \"kubernetes.io/projected/c1782f70-1ae6-42da-98ad-f42f2495b261-kube-api-access-2vvtp\") pod \"ssh-known-hosts-edpm-deployment-98rv7\" (UID: \"c1782f70-1ae6-42da-98ad-f42f2495b261\") " pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.248413 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c1782f70-1ae6-42da-98ad-f42f2495b261-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-98rv7\" (UID: \"c1782f70-1ae6-42da-98ad-f42f2495b261\") " pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.248690 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1782f70-1ae6-42da-98ad-f42f2495b261-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-98rv7\" (UID: \"c1782f70-1ae6-42da-98ad-f42f2495b261\") " pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.350681 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c1782f70-1ae6-42da-98ad-f42f2495b261-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-98rv7\" (UID: \"c1782f70-1ae6-42da-98ad-f42f2495b261\") " pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.350774 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1782f70-1ae6-42da-98ad-f42f2495b261-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-98rv7\" (UID: \"c1782f70-1ae6-42da-98ad-f42f2495b261\") " pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.350843 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vvtp\" (UniqueName: \"kubernetes.io/projected/c1782f70-1ae6-42da-98ad-f42f2495b261-kube-api-access-2vvtp\") pod \"ssh-known-hosts-edpm-deployment-98rv7\" (UID: \"c1782f70-1ae6-42da-98ad-f42f2495b261\") " pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.360335 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c1782f70-1ae6-42da-98ad-f42f2495b261-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-98rv7\" (UID: \"c1782f70-1ae6-42da-98ad-f42f2495b261\") " pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.360508 4717 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1782f70-1ae6-42da-98ad-f42f2495b261-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-98rv7\" (UID: \"c1782f70-1ae6-42da-98ad-f42f2495b261\") " pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.369370 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vvtp\" (UniqueName: \"kubernetes.io/projected/c1782f70-1ae6-42da-98ad-f42f2495b261-kube-api-access-2vvtp\") pod \"ssh-known-hosts-edpm-deployment-98rv7\" (UID: \"c1782f70-1ae6-42da-98ad-f42f2495b261\") " pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.417713 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.988371 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-98rv7"] Feb 18 12:20:52 crc kubenswrapper[4717]: I0218 12:20:52.992893 4717 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:20:53 crc kubenswrapper[4717]: I0218 12:20:53.919209 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" event={"ID":"c1782f70-1ae6-42da-98ad-f42f2495b261","Type":"ContainerStarted","Data":"fe6602e745225906aa0ab4611591de9647af538902ee78e86023f27b9da70366"} Feb 18 12:20:53 crc kubenswrapper[4717]: I0218 12:20:53.919567 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" event={"ID":"c1782f70-1ae6-42da-98ad-f42f2495b261","Type":"ContainerStarted","Data":"0878333d1715e81f0c5e17241a404d6ccd45d4755dfdccf5795445d95b099c92"} Feb 18 12:20:53 crc kubenswrapper[4717]: I0218 
12:20:53.943051 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" podStartSLOduration=1.542731458 podStartE2EDuration="1.943019541s" podCreationTimestamp="2026-02-18 12:20:52 +0000 UTC" firstStartedPulling="2026-02-18 12:20:52.992612368 +0000 UTC m=+1887.394713684" lastFinishedPulling="2026-02-18 12:20:53.392900431 +0000 UTC m=+1887.795001767" observedRunningTime="2026-02-18 12:20:53.932774198 +0000 UTC m=+1888.334875534" watchObservedRunningTime="2026-02-18 12:20:53.943019541 +0000 UTC m=+1888.345120857" Feb 18 12:20:59 crc kubenswrapper[4717]: I0218 12:20:59.973060 4717 generic.go:334] "Generic (PLEG): container finished" podID="c1782f70-1ae6-42da-98ad-f42f2495b261" containerID="fe6602e745225906aa0ab4611591de9647af538902ee78e86023f27b9da70366" exitCode=0 Feb 18 12:20:59 crc kubenswrapper[4717]: I0218 12:20:59.973161 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" event={"ID":"c1782f70-1ae6-42da-98ad-f42f2495b261","Type":"ContainerDied","Data":"fe6602e745225906aa0ab4611591de9647af538902ee78e86023f27b9da70366"} Feb 18 12:21:01 crc kubenswrapper[4717]: I0218 12:21:01.037811 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:21:01 crc kubenswrapper[4717]: E0218 12:21:01.038827 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:21:01 crc kubenswrapper[4717]: I0218 12:21:01.421249 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" Feb 18 12:21:01 crc kubenswrapper[4717]: I0218 12:21:01.569118 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c1782f70-1ae6-42da-98ad-f42f2495b261-inventory-0\") pod \"c1782f70-1ae6-42da-98ad-f42f2495b261\" (UID: \"c1782f70-1ae6-42da-98ad-f42f2495b261\") " Feb 18 12:21:01 crc kubenswrapper[4717]: I0218 12:21:01.569325 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vvtp\" (UniqueName: \"kubernetes.io/projected/c1782f70-1ae6-42da-98ad-f42f2495b261-kube-api-access-2vvtp\") pod \"c1782f70-1ae6-42da-98ad-f42f2495b261\" (UID: \"c1782f70-1ae6-42da-98ad-f42f2495b261\") " Feb 18 12:21:01 crc kubenswrapper[4717]: I0218 12:21:01.569391 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1782f70-1ae6-42da-98ad-f42f2495b261-ssh-key-openstack-edpm-ipam\") pod \"c1782f70-1ae6-42da-98ad-f42f2495b261\" (UID: \"c1782f70-1ae6-42da-98ad-f42f2495b261\") " Feb 18 12:21:01 crc kubenswrapper[4717]: I0218 12:21:01.577055 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1782f70-1ae6-42da-98ad-f42f2495b261-kube-api-access-2vvtp" (OuterVolumeSpecName: "kube-api-access-2vvtp") pod "c1782f70-1ae6-42da-98ad-f42f2495b261" (UID: "c1782f70-1ae6-42da-98ad-f42f2495b261"). InnerVolumeSpecName "kube-api-access-2vvtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:21:01 crc kubenswrapper[4717]: I0218 12:21:01.601394 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1782f70-1ae6-42da-98ad-f42f2495b261-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c1782f70-1ae6-42da-98ad-f42f2495b261" (UID: "c1782f70-1ae6-42da-98ad-f42f2495b261"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:21:01 crc kubenswrapper[4717]: I0218 12:21:01.604663 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1782f70-1ae6-42da-98ad-f42f2495b261-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c1782f70-1ae6-42da-98ad-f42f2495b261" (UID: "c1782f70-1ae6-42da-98ad-f42f2495b261"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:21:01 crc kubenswrapper[4717]: I0218 12:21:01.672219 4717 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c1782f70-1ae6-42da-98ad-f42f2495b261-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:21:01 crc kubenswrapper[4717]: I0218 12:21:01.672313 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vvtp\" (UniqueName: \"kubernetes.io/projected/c1782f70-1ae6-42da-98ad-f42f2495b261-kube-api-access-2vvtp\") on node \"crc\" DevicePath \"\"" Feb 18 12:21:01 crc kubenswrapper[4717]: I0218 12:21:01.672346 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1782f70-1ae6-42da-98ad-f42f2495b261-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:01.993405 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" event={"ID":"c1782f70-1ae6-42da-98ad-f42f2495b261","Type":"ContainerDied","Data":"0878333d1715e81f0c5e17241a404d6ccd45d4755dfdccf5795445d95b099c92"} Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:01.993451 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0878333d1715e81f0c5e17241a404d6ccd45d4755dfdccf5795445d95b099c92" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:01.993516 
4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-98rv7" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.097421 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f"] Feb 18 12:21:02 crc kubenswrapper[4717]: E0218 12:21:02.098033 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1782f70-1ae6-42da-98ad-f42f2495b261" containerName="ssh-known-hosts-edpm-deployment" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.098055 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1782f70-1ae6-42da-98ad-f42f2495b261" containerName="ssh-known-hosts-edpm-deployment" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.098386 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1782f70-1ae6-42da-98ad-f42f2495b261" containerName="ssh-known-hosts-edpm-deployment" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.099334 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.101654 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.101811 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.105763 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.106422 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.111673 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f"] Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.182863 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32a22361-b7f3-4429-a590-edecf026891c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kjq2f\" (UID: \"32a22361-b7f3-4429-a590-edecf026891c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.183674 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95vxd\" (UniqueName: \"kubernetes.io/projected/32a22361-b7f3-4429-a590-edecf026891c-kube-api-access-95vxd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kjq2f\" (UID: \"32a22361-b7f3-4429-a590-edecf026891c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.184061 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32a22361-b7f3-4429-a590-edecf026891c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kjq2f\" (UID: \"32a22361-b7f3-4429-a590-edecf026891c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.287127 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32a22361-b7f3-4429-a590-edecf026891c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kjq2f\" (UID: \"32a22361-b7f3-4429-a590-edecf026891c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.287343 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32a22361-b7f3-4429-a590-edecf026891c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kjq2f\" (UID: \"32a22361-b7f3-4429-a590-edecf026891c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.287384 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95vxd\" (UniqueName: \"kubernetes.io/projected/32a22361-b7f3-4429-a590-edecf026891c-kube-api-access-95vxd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kjq2f\" (UID: \"32a22361-b7f3-4429-a590-edecf026891c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.294966 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32a22361-b7f3-4429-a590-edecf026891c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kjq2f\" (UID: 
\"32a22361-b7f3-4429-a590-edecf026891c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.295154 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32a22361-b7f3-4429-a590-edecf026891c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kjq2f\" (UID: \"32a22361-b7f3-4429-a590-edecf026891c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.307600 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95vxd\" (UniqueName: \"kubernetes.io/projected/32a22361-b7f3-4429-a590-edecf026891c-kube-api-access-95vxd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kjq2f\" (UID: \"32a22361-b7f3-4429-a590-edecf026891c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" Feb 18 12:21:02 crc kubenswrapper[4717]: I0218 12:21:02.416493 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" Feb 18 12:21:03 crc kubenswrapper[4717]: I0218 12:21:03.012617 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f"] Feb 18 12:21:04 crc kubenswrapper[4717]: I0218 12:21:04.021140 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" event={"ID":"32a22361-b7f3-4429-a590-edecf026891c","Type":"ContainerStarted","Data":"141888b4a881b859db436ed652102ae7116100e24e4405177ea5e39bde9780cf"} Feb 18 12:21:05 crc kubenswrapper[4717]: I0218 12:21:05.032209 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" event={"ID":"32a22361-b7f3-4429-a590-edecf026891c","Type":"ContainerStarted","Data":"b2fb2fe8c13c3b1bc0e14298ca57ac0bb31d9744717175af60eb4c8ce3f194ea"} Feb 18 12:21:05 crc kubenswrapper[4717]: I0218 12:21:05.063586 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" podStartSLOduration=1.907851255 podStartE2EDuration="3.063552911s" podCreationTimestamp="2026-02-18 12:21:02 +0000 UTC" firstStartedPulling="2026-02-18 12:21:03.016676147 +0000 UTC m=+1897.418777483" lastFinishedPulling="2026-02-18 12:21:04.172377823 +0000 UTC m=+1898.574479139" observedRunningTime="2026-02-18 12:21:05.054615875 +0000 UTC m=+1899.456717221" watchObservedRunningTime="2026-02-18 12:21:05.063552911 +0000 UTC m=+1899.465654267" Feb 18 12:21:12 crc kubenswrapper[4717]: I0218 12:21:12.037867 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:21:12 crc kubenswrapper[4717]: E0218 12:21:12.039073 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:21:12 crc kubenswrapper[4717]: I0218 12:21:12.107239 4717 generic.go:334] "Generic (PLEG): container finished" podID="32a22361-b7f3-4429-a590-edecf026891c" containerID="b2fb2fe8c13c3b1bc0e14298ca57ac0bb31d9744717175af60eb4c8ce3f194ea" exitCode=0 Feb 18 12:21:12 crc kubenswrapper[4717]: I0218 12:21:12.107347 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" event={"ID":"32a22361-b7f3-4429-a590-edecf026891c","Type":"ContainerDied","Data":"b2fb2fe8c13c3b1bc0e14298ca57ac0bb31d9744717175af60eb4c8ce3f194ea"} Feb 18 12:21:13 crc kubenswrapper[4717]: I0218 12:21:13.519040 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" Feb 18 12:21:13 crc kubenswrapper[4717]: I0218 12:21:13.627559 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32a22361-b7f3-4429-a590-edecf026891c-inventory\") pod \"32a22361-b7f3-4429-a590-edecf026891c\" (UID: \"32a22361-b7f3-4429-a590-edecf026891c\") " Feb 18 12:21:13 crc kubenswrapper[4717]: I0218 12:21:13.627702 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95vxd\" (UniqueName: \"kubernetes.io/projected/32a22361-b7f3-4429-a590-edecf026891c-kube-api-access-95vxd\") pod \"32a22361-b7f3-4429-a590-edecf026891c\" (UID: \"32a22361-b7f3-4429-a590-edecf026891c\") " Feb 18 12:21:13 crc kubenswrapper[4717]: I0218 12:21:13.627872 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/32a22361-b7f3-4429-a590-edecf026891c-ssh-key-openstack-edpm-ipam\") pod \"32a22361-b7f3-4429-a590-edecf026891c\" (UID: \"32a22361-b7f3-4429-a590-edecf026891c\") " Feb 18 12:21:13 crc kubenswrapper[4717]: I0218 12:21:13.633498 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a22361-b7f3-4429-a590-edecf026891c-kube-api-access-95vxd" (OuterVolumeSpecName: "kube-api-access-95vxd") pod "32a22361-b7f3-4429-a590-edecf026891c" (UID: "32a22361-b7f3-4429-a590-edecf026891c"). InnerVolumeSpecName "kube-api-access-95vxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:21:13 crc kubenswrapper[4717]: I0218 12:21:13.656189 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a22361-b7f3-4429-a590-edecf026891c-inventory" (OuterVolumeSpecName: "inventory") pod "32a22361-b7f3-4429-a590-edecf026891c" (UID: "32a22361-b7f3-4429-a590-edecf026891c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:21:13 crc kubenswrapper[4717]: I0218 12:21:13.661596 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a22361-b7f3-4429-a590-edecf026891c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "32a22361-b7f3-4429-a590-edecf026891c" (UID: "32a22361-b7f3-4429-a590-edecf026891c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:21:13 crc kubenswrapper[4717]: I0218 12:21:13.731017 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95vxd\" (UniqueName: \"kubernetes.io/projected/32a22361-b7f3-4429-a590-edecf026891c-kube-api-access-95vxd\") on node \"crc\" DevicePath \"\"" Feb 18 12:21:13 crc kubenswrapper[4717]: I0218 12:21:13.731064 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32a22361-b7f3-4429-a590-edecf026891c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:21:13 crc kubenswrapper[4717]: I0218 12:21:13.731076 4717 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32a22361-b7f3-4429-a590-edecf026891c-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.130208 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" event={"ID":"32a22361-b7f3-4429-a590-edecf026891c","Type":"ContainerDied","Data":"141888b4a881b859db436ed652102ae7116100e24e4405177ea5e39bde9780cf"} Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.130297 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="141888b4a881b859db436ed652102ae7116100e24e4405177ea5e39bde9780cf" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.130300 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kjq2f" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.223894 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld"] Feb 18 12:21:14 crc kubenswrapper[4717]: E0218 12:21:14.224494 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a22361-b7f3-4429-a590-edecf026891c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.224515 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a22361-b7f3-4429-a590-edecf026891c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.224796 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a22361-b7f3-4429-a590-edecf026891c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.225683 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.230679 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.232406 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.232541 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.232773 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.244570 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld"] Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.343830 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nslld\" (UID: \"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.343919 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8msj\" (UniqueName: \"kubernetes.io/projected/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-kube-api-access-g8msj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nslld\" (UID: \"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.343997 4717 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nslld\" (UID: \"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.446369 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nslld\" (UID: \"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.446520 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8msj\" (UniqueName: \"kubernetes.io/projected/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-kube-api-access-g8msj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nslld\" (UID: \"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.446680 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nslld\" (UID: \"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.451148 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-nslld\" (UID: \"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.454077 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nslld\" (UID: \"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.466239 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8msj\" (UniqueName: \"kubernetes.io/projected/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-kube-api-access-g8msj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nslld\" (UID: \"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" Feb 18 12:21:14 crc kubenswrapper[4717]: I0218 12:21:14.547115 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" Feb 18 12:21:15 crc kubenswrapper[4717]: I0218 12:21:15.051729 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld"] Feb 18 12:21:15 crc kubenswrapper[4717]: I0218 12:21:15.146812 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" event={"ID":"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47","Type":"ContainerStarted","Data":"e8809efa26f209efeeddccb992d77bc90bdc4c7bdf9304c4b15961ec1e02ebcf"} Feb 18 12:21:16 crc kubenswrapper[4717]: I0218 12:21:16.160452 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" event={"ID":"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47","Type":"ContainerStarted","Data":"3fc351b08eea0a0a593c4583fe3b42c26ef70150360fb19882c7218c33aba4c3"} Feb 18 12:21:16 crc kubenswrapper[4717]: I0218 12:21:16.179432 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" podStartSLOduration=1.761922312 podStartE2EDuration="2.179407547s" podCreationTimestamp="2026-02-18 12:21:14 +0000 UTC" firstStartedPulling="2026-02-18 12:21:15.054765909 +0000 UTC m=+1909.456867225" lastFinishedPulling="2026-02-18 12:21:15.472251134 +0000 UTC m=+1909.874352460" observedRunningTime="2026-02-18 12:21:16.178300325 +0000 UTC m=+1910.580401651" watchObservedRunningTime="2026-02-18 12:21:16.179407547 +0000 UTC m=+1910.581508863" Feb 18 12:21:23 crc kubenswrapper[4717]: I0218 12:21:23.037488 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:21:23 crc kubenswrapper[4717]: E0218 12:21:23.039577 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:21:25 crc kubenswrapper[4717]: I0218 12:21:25.251356 4717 generic.go:334] "Generic (PLEG): container finished" podID="0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47" containerID="3fc351b08eea0a0a593c4583fe3b42c26ef70150360fb19882c7218c33aba4c3" exitCode=0 Feb 18 12:21:25 crc kubenswrapper[4717]: I0218 12:21:25.251542 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" event={"ID":"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47","Type":"ContainerDied","Data":"3fc351b08eea0a0a593c4583fe3b42c26ef70150360fb19882c7218c33aba4c3"} Feb 18 12:21:26 crc kubenswrapper[4717]: I0218 12:21:26.678900 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" Feb 18 12:21:26 crc kubenswrapper[4717]: I0218 12:21:26.830872 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-ssh-key-openstack-edpm-ipam\") pod \"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47\" (UID: \"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47\") " Feb 18 12:21:26 crc kubenswrapper[4717]: I0218 12:21:26.831650 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8msj\" (UniqueName: \"kubernetes.io/projected/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-kube-api-access-g8msj\") pod \"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47\" (UID: \"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47\") " Feb 18 12:21:26 crc kubenswrapper[4717]: I0218 12:21:26.831774 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-inventory\") pod \"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47\" (UID: \"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47\") " Feb 18 12:21:26 crc kubenswrapper[4717]: I0218 12:21:26.838211 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-kube-api-access-g8msj" (OuterVolumeSpecName: "kube-api-access-g8msj") pod "0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47" (UID: "0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47"). InnerVolumeSpecName "kube-api-access-g8msj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:21:26 crc kubenswrapper[4717]: I0218 12:21:26.859791 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-inventory" (OuterVolumeSpecName: "inventory") pod "0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47" (UID: "0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:21:26 crc kubenswrapper[4717]: I0218 12:21:26.867570 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47" (UID: "0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:21:26 crc kubenswrapper[4717]: I0218 12:21:26.935159 4717 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:21:26 crc kubenswrapper[4717]: I0218 12:21:26.935220 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:21:26 crc kubenswrapper[4717]: I0218 12:21:26.935233 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8msj\" (UniqueName: \"kubernetes.io/projected/0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47-kube-api-access-g8msj\") on node \"crc\" DevicePath \"\"" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.272302 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" event={"ID":"0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47","Type":"ContainerDied","Data":"e8809efa26f209efeeddccb992d77bc90bdc4c7bdf9304c4b15961ec1e02ebcf"} Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.272368 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8809efa26f209efeeddccb992d77bc90bdc4c7bdf9304c4b15961ec1e02ebcf" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.272414 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nslld" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.364456 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5"] Feb 18 12:21:27 crc kubenswrapper[4717]: E0218 12:21:27.364925 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.364942 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.365127 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.366358 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.375922 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.376203 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.376600 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.376610 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.376607 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.376674 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.376819 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.376900 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.386437 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5"] Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.446468 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.446537 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9cvb\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-kube-api-access-r9cvb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.446584 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.446739 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.446839 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.446911 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.446956 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.447100 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.447252 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.447350 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.447416 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.447591 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.447665 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.447828 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.549732 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.549825 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.549868 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-ovn-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.549907 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.549963 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.549996 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.550083 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" 
(UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.550111 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.550161 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9cvb\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-kube-api-access-r9cvb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.550195 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.550229 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.550290 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.550334 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.550373 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.556791 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 
12:21:27.557006 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.557037 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.557356 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.557614 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.557735 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.558039 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.560549 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.560575 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.561079 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.561903 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.562166 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.562293 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.569212 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9cvb\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-kube-api-access-r9cvb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5\" 
(UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:27 crc kubenswrapper[4717]: I0218 12:21:27.688288 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:21:28 crc kubenswrapper[4717]: I0218 12:21:28.245957 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5"] Feb 18 12:21:28 crc kubenswrapper[4717]: W0218 12:21:28.246788 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd992beb_fe74_4076_8233_3bdc67b5de99.slice/crio-f48ca6d0cb08541069ab1f3fcf70d2a3735383ea42480ac651717250d2b5ba19 WatchSource:0}: Error finding container f48ca6d0cb08541069ab1f3fcf70d2a3735383ea42480ac651717250d2b5ba19: Status 404 returned error can't find the container with id f48ca6d0cb08541069ab1f3fcf70d2a3735383ea42480ac651717250d2b5ba19 Feb 18 12:21:28 crc kubenswrapper[4717]: I0218 12:21:28.283624 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" event={"ID":"fd992beb-fe74-4076-8233-3bdc67b5de99","Type":"ContainerStarted","Data":"f48ca6d0cb08541069ab1f3fcf70d2a3735383ea42480ac651717250d2b5ba19"} Feb 18 12:21:29 crc kubenswrapper[4717]: I0218 12:21:29.294487 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" event={"ID":"fd992beb-fe74-4076-8233-3bdc67b5de99","Type":"ContainerStarted","Data":"3769d9c747c82a078c9da7a34faba758f7082b8adee1f8d0505476fce482218f"} Feb 18 12:21:29 crc kubenswrapper[4717]: I0218 12:21:29.321396 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" podStartSLOduration=1.90159503 
podStartE2EDuration="2.321366741s" podCreationTimestamp="2026-02-18 12:21:27 +0000 UTC" firstStartedPulling="2026-02-18 12:21:28.250158462 +0000 UTC m=+1922.652259778" lastFinishedPulling="2026-02-18 12:21:28.669930173 +0000 UTC m=+1923.072031489" observedRunningTime="2026-02-18 12:21:29.321064663 +0000 UTC m=+1923.723165989" watchObservedRunningTime="2026-02-18 12:21:29.321366741 +0000 UTC m=+1923.723468067" Feb 18 12:21:37 crc kubenswrapper[4717]: I0218 12:21:37.045512 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:21:37 crc kubenswrapper[4717]: E0218 12:21:37.046935 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:21:50 crc kubenswrapper[4717]: I0218 12:21:50.037360 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:21:50 crc kubenswrapper[4717]: E0218 12:21:50.039895 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:22:02 crc kubenswrapper[4717]: I0218 12:22:02.614110 4717 generic.go:334] "Generic (PLEG): container finished" podID="fd992beb-fe74-4076-8233-3bdc67b5de99" 
containerID="3769d9c747c82a078c9da7a34faba758f7082b8adee1f8d0505476fce482218f" exitCode=0 Feb 18 12:22:02 crc kubenswrapper[4717]: I0218 12:22:02.614215 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" event={"ID":"fd992beb-fe74-4076-8233-3bdc67b5de99","Type":"ContainerDied","Data":"3769d9c747c82a078c9da7a34faba758f7082b8adee1f8d0505476fce482218f"} Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.036711 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:22:04 crc kubenswrapper[4717]: E0218 12:22:04.037509 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.059712 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.218756 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-repo-setup-combined-ca-bundle\") pod \"fd992beb-fe74-4076-8233-3bdc67b5de99\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.218919 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-ovn-default-certs-0\") pod \"fd992beb-fe74-4076-8233-3bdc67b5de99\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.218965 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"fd992beb-fe74-4076-8233-3bdc67b5de99\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.218999 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9cvb\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-kube-api-access-r9cvb\") pod \"fd992beb-fe74-4076-8233-3bdc67b5de99\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.219053 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-libvirt-combined-ca-bundle\") pod \"fd992beb-fe74-4076-8233-3bdc67b5de99\" (UID: 
\"fd992beb-fe74-4076-8233-3bdc67b5de99\") " Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.219102 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"fd992beb-fe74-4076-8233-3bdc67b5de99\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.219150 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-ovn-combined-ca-bundle\") pod \"fd992beb-fe74-4076-8233-3bdc67b5de99\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.219183 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-neutron-metadata-combined-ca-bundle\") pod \"fd992beb-fe74-4076-8233-3bdc67b5de99\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.219234 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-nova-combined-ca-bundle\") pod \"fd992beb-fe74-4076-8233-3bdc67b5de99\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.219361 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-inventory\") pod \"fd992beb-fe74-4076-8233-3bdc67b5de99\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.219415 4717 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-telemetry-combined-ca-bundle\") pod \"fd992beb-fe74-4076-8233-3bdc67b5de99\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.219510 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-ssh-key-openstack-edpm-ipam\") pod \"fd992beb-fe74-4076-8233-3bdc67b5de99\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.219570 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-bootstrap-combined-ca-bundle\") pod \"fd992beb-fe74-4076-8233-3bdc67b5de99\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.219608 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"fd992beb-fe74-4076-8233-3bdc67b5de99\" (UID: \"fd992beb-fe74-4076-8233-3bdc67b5de99\") " Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.226559 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "fd992beb-fe74-4076-8233-3bdc67b5de99" (UID: "fd992beb-fe74-4076-8233-3bdc67b5de99"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.226584 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "fd992beb-fe74-4076-8233-3bdc67b5de99" (UID: "fd992beb-fe74-4076-8233-3bdc67b5de99"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.226915 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "fd992beb-fe74-4076-8233-3bdc67b5de99" (UID: "fd992beb-fe74-4076-8233-3bdc67b5de99"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.226977 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "fd992beb-fe74-4076-8233-3bdc67b5de99" (UID: "fd992beb-fe74-4076-8233-3bdc67b5de99"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.227827 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "fd992beb-fe74-4076-8233-3bdc67b5de99" (UID: "fd992beb-fe74-4076-8233-3bdc67b5de99"). 
InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.228495 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "fd992beb-fe74-4076-8233-3bdc67b5de99" (UID: "fd992beb-fe74-4076-8233-3bdc67b5de99"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.228680 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "fd992beb-fe74-4076-8233-3bdc67b5de99" (UID: "fd992beb-fe74-4076-8233-3bdc67b5de99"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.229792 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "fd992beb-fe74-4076-8233-3bdc67b5de99" (UID: "fd992beb-fe74-4076-8233-3bdc67b5de99"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.231740 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "fd992beb-fe74-4076-8233-3bdc67b5de99" (UID: "fd992beb-fe74-4076-8233-3bdc67b5de99"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.231818 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-kube-api-access-r9cvb" (OuterVolumeSpecName: "kube-api-access-r9cvb") pod "fd992beb-fe74-4076-8233-3bdc67b5de99" (UID: "fd992beb-fe74-4076-8233-3bdc67b5de99"). InnerVolumeSpecName "kube-api-access-r9cvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.232788 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "fd992beb-fe74-4076-8233-3bdc67b5de99" (UID: "fd992beb-fe74-4076-8233-3bdc67b5de99"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.232798 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "fd992beb-fe74-4076-8233-3bdc67b5de99" (UID: "fd992beb-fe74-4076-8233-3bdc67b5de99"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.257957 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-inventory" (OuterVolumeSpecName: "inventory") pod "fd992beb-fe74-4076-8233-3bdc67b5de99" (UID: "fd992beb-fe74-4076-8233-3bdc67b5de99"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.260971 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fd992beb-fe74-4076-8233-3bdc67b5de99" (UID: "fd992beb-fe74-4076-8233-3bdc67b5de99"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.322822 4717 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.322862 4717 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.322877 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9cvb\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-kube-api-access-r9cvb\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.322888 4717 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.322898 4717 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.322908 4717 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.322917 4717 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.322929 4717 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.322941 4717 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.322953 4717 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.322964 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.322977 4717 reconciler_common.go:293] "Volume detached for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.322987 4717 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd992beb-fe74-4076-8233-3bdc67b5de99-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.322997 4717 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd992beb-fe74-4076-8233-3bdc67b5de99-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.638055 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" event={"ID":"fd992beb-fe74-4076-8233-3bdc67b5de99","Type":"ContainerDied","Data":"f48ca6d0cb08541069ab1f3fcf70d2a3735383ea42480ac651717250d2b5ba19"} Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.638113 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f48ca6d0cb08541069ab1f3fcf70d2a3735383ea42480ac651717250d2b5ba19" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.638168 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.891644 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9"] Feb 18 12:22:04 crc kubenswrapper[4717]: E0218 12:22:04.892216 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd992beb-fe74-4076-8233-3bdc67b5de99" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.892242 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd992beb-fe74-4076-8233-3bdc67b5de99" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.893119 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd992beb-fe74-4076-8233-3bdc67b5de99" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.894222 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.903191 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.903521 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.903642 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.903690 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.909871 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9"] Feb 18 12:22:04 crc kubenswrapper[4717]: I0218 12:22:04.917430 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.036195 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pnbv9\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.037062 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pnbv9\" (UID: 
\"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.037375 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pnbv9\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.037592 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5cdc94da-0dcb-40a3-9800-2655bae295cb-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pnbv9\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.037652 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkjsw\" (UniqueName: \"kubernetes.io/projected/5cdc94da-0dcb-40a3-9800-2655bae295cb-kube-api-access-tkjsw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pnbv9\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.140128 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pnbv9\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.140904 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pnbv9\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.141057 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pnbv9\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.141172 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5cdc94da-0dcb-40a3-9800-2655bae295cb-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pnbv9\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.141278 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkjsw\" (UniqueName: \"kubernetes.io/projected/5cdc94da-0dcb-40a3-9800-2655bae295cb-kube-api-access-tkjsw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pnbv9\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.142679 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5cdc94da-0dcb-40a3-9800-2655bae295cb-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pnbv9\" (UID: 
\"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.146020 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pnbv9\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.146231 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pnbv9\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.148027 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pnbv9\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.161472 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkjsw\" (UniqueName: \"kubernetes.io/projected/5cdc94da-0dcb-40a3-9800-2655bae295cb-kube-api-access-tkjsw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pnbv9\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.233449 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:22:05 crc kubenswrapper[4717]: I0218 12:22:05.790460 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9"] Feb 18 12:22:06 crc kubenswrapper[4717]: I0218 12:22:06.667188 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" event={"ID":"5cdc94da-0dcb-40a3-9800-2655bae295cb","Type":"ContainerStarted","Data":"a3b5e74bb9949e03baf4af94b2befc19dfb066d1dd483902a74117045a77e29a"} Feb 18 12:22:06 crc kubenswrapper[4717]: I0218 12:22:06.667588 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" event={"ID":"5cdc94da-0dcb-40a3-9800-2655bae295cb","Type":"ContainerStarted","Data":"3bc11603905e2cd8241b9a8fe48af732dd50222de919fa045fb0aa98cab152d8"} Feb 18 12:22:17 crc kubenswrapper[4717]: I0218 12:22:17.043480 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:22:17 crc kubenswrapper[4717]: E0218 12:22:17.044286 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:22:33 crc kubenswrapper[4717]: I0218 12:22:33.036773 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:22:33 crc kubenswrapper[4717]: E0218 12:22:33.037589 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:22:34 crc kubenswrapper[4717]: I0218 12:22:34.416875 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-586778dd75-mtms6" podUID="e21881f2-73fb-4d0f-974c-a74694a2b301" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 18 12:22:47 crc kubenswrapper[4717]: I0218 12:22:47.042923 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:22:47 crc kubenswrapper[4717]: I0218 12:22:47.567505 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"2cadc8b7b94fe4f5a4e6a22f9c6f9680d0439afb6d0ced72d939b5231b18d983"} Feb 18 12:22:47 crc kubenswrapper[4717]: I0218 12:22:47.593836 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" podStartSLOduration=43.125485097 podStartE2EDuration="43.59381126s" podCreationTimestamp="2026-02-18 12:22:04 +0000 UTC" firstStartedPulling="2026-02-18 12:22:05.792558729 +0000 UTC m=+1960.194660045" lastFinishedPulling="2026-02-18 12:22:06.260884892 +0000 UTC m=+1960.662986208" observedRunningTime="2026-02-18 12:22:06.691398555 +0000 UTC m=+1961.093499871" watchObservedRunningTime="2026-02-18 12:22:47.59381126 +0000 UTC m=+2001.995912576" Feb 18 12:23:00 crc kubenswrapper[4717]: I0218 12:23:00.710195 4717 generic.go:334] "Generic (PLEG): container finished" podID="5cdc94da-0dcb-40a3-9800-2655bae295cb" containerID="a3b5e74bb9949e03baf4af94b2befc19dfb066d1dd483902a74117045a77e29a" 
exitCode=0 Feb 18 12:23:00 crc kubenswrapper[4717]: I0218 12:23:00.710249 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" event={"ID":"5cdc94da-0dcb-40a3-9800-2655bae295cb","Type":"ContainerDied","Data":"a3b5e74bb9949e03baf4af94b2befc19dfb066d1dd483902a74117045a77e29a"} Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.158225 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.266783 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5cdc94da-0dcb-40a3-9800-2655bae295cb-ovncontroller-config-0\") pod \"5cdc94da-0dcb-40a3-9800-2655bae295cb\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.266966 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-inventory\") pod \"5cdc94da-0dcb-40a3-9800-2655bae295cb\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.267049 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-ovn-combined-ca-bundle\") pod \"5cdc94da-0dcb-40a3-9800-2655bae295cb\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.267078 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-ssh-key-openstack-edpm-ipam\") pod \"5cdc94da-0dcb-40a3-9800-2655bae295cb\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") 
" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.267116 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkjsw\" (UniqueName: \"kubernetes.io/projected/5cdc94da-0dcb-40a3-9800-2655bae295cb-kube-api-access-tkjsw\") pod \"5cdc94da-0dcb-40a3-9800-2655bae295cb\" (UID: \"5cdc94da-0dcb-40a3-9800-2655bae295cb\") " Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.272799 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cdc94da-0dcb-40a3-9800-2655bae295cb-kube-api-access-tkjsw" (OuterVolumeSpecName: "kube-api-access-tkjsw") pod "5cdc94da-0dcb-40a3-9800-2655bae295cb" (UID: "5cdc94da-0dcb-40a3-9800-2655bae295cb"). InnerVolumeSpecName "kube-api-access-tkjsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.273704 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "5cdc94da-0dcb-40a3-9800-2655bae295cb" (UID: "5cdc94da-0dcb-40a3-9800-2655bae295cb"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.294862 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5cdc94da-0dcb-40a3-9800-2655bae295cb" (UID: "5cdc94da-0dcb-40a3-9800-2655bae295cb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.297567 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-inventory" (OuterVolumeSpecName: "inventory") pod "5cdc94da-0dcb-40a3-9800-2655bae295cb" (UID: "5cdc94da-0dcb-40a3-9800-2655bae295cb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.300083 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cdc94da-0dcb-40a3-9800-2655bae295cb-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "5cdc94da-0dcb-40a3-9800-2655bae295cb" (UID: "5cdc94da-0dcb-40a3-9800-2655bae295cb"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.369627 4717 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5cdc94da-0dcb-40a3-9800-2655bae295cb-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.369683 4717 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.369697 4717 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.369708 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5cdc94da-0dcb-40a3-9800-2655bae295cb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.369720 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkjsw\" (UniqueName: \"kubernetes.io/projected/5cdc94da-0dcb-40a3-9800-2655bae295cb-kube-api-access-tkjsw\") on node \"crc\" DevicePath \"\"" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.729243 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" event={"ID":"5cdc94da-0dcb-40a3-9800-2655bae295cb","Type":"ContainerDied","Data":"3bc11603905e2cd8241b9a8fe48af732dd50222de919fa045fb0aa98cab152d8"} Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.729315 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bc11603905e2cd8241b9a8fe48af732dd50222de919fa045fb0aa98cab152d8" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.729312 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pnbv9" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.855495 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6"] Feb 18 12:23:02 crc kubenswrapper[4717]: E0218 12:23:02.856301 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cdc94da-0dcb-40a3-9800-2655bae295cb" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.856323 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cdc94da-0dcb-40a3-9800-2655bae295cb" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.856512 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cdc94da-0dcb-40a3-9800-2655bae295cb" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.862703 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.866090 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.866703 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.866813 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.867212 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.867308 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.867538 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.867982 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6"] Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.880172 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5x8h\" (UniqueName: \"kubernetes.io/projected/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-kube-api-access-x5x8h\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.880236 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.880351 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.880384 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.880408 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.881013 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.983006 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.983325 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.983428 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.983617 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.983702 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5x8h\" (UniqueName: \"kubernetes.io/projected/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-kube-api-access-x5x8h\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.983773 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.987597 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.987597 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-ssh-key-openstack-edpm-ipam\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.987960 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.988434 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:02 crc kubenswrapper[4717]: I0218 12:23:02.990565 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:03 crc kubenswrapper[4717]: I0218 12:23:03.002745 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5x8h\" (UniqueName: \"kubernetes.io/projected/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-kube-api-access-x5x8h\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:03 crc kubenswrapper[4717]: I0218 12:23:03.188346 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:03 crc kubenswrapper[4717]: I0218 12:23:03.722295 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6"] Feb 18 12:23:03 crc kubenswrapper[4717]: I0218 12:23:03.740479 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" event={"ID":"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6","Type":"ContainerStarted","Data":"8df25a5bd4489b0f279bf17478678fdcbe16840ff3b3d8e1ac7de16201b5210c"} Feb 18 12:23:04 crc kubenswrapper[4717]: I0218 12:23:04.751768 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" event={"ID":"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6","Type":"ContainerStarted","Data":"0544edcec2278c5819b6068920f86d2029eb3c29d65d3a812d68f07c9219e583"} Feb 18 12:23:04 crc kubenswrapper[4717]: I0218 12:23:04.776710 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" podStartSLOduration=2.2265623469999998 podStartE2EDuration="2.776680675s" podCreationTimestamp="2026-02-18 12:23:02 +0000 UTC" firstStartedPulling="2026-02-18 12:23:03.72892584 +0000 UTC m=+2018.131027156" lastFinishedPulling="2026-02-18 12:23:04.279044168 +0000 UTC m=+2018.681145484" observedRunningTime="2026-02-18 12:23:04.771027802 +0000 UTC m=+2019.173129118" watchObservedRunningTime="2026-02-18 12:23:04.776680675 +0000 UTC m=+2019.178781991" Feb 18 12:23:47 crc kubenswrapper[4717]: I0218 12:23:47.160083 4717 generic.go:334] "Generic (PLEG): container finished" 
podID="3ef3746b-6714-4c50-b0fe-3d5d1632f6c6" containerID="0544edcec2278c5819b6068920f86d2029eb3c29d65d3a812d68f07c9219e583" exitCode=0 Feb 18 12:23:47 crc kubenswrapper[4717]: I0218 12:23:47.160189 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" event={"ID":"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6","Type":"ContainerDied","Data":"0544edcec2278c5819b6068920f86d2029eb3c29d65d3a812d68f07c9219e583"} Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.635302 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.768311 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-neutron-metadata-combined-ca-bundle\") pod \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.768552 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5x8h\" (UniqueName: \"kubernetes.io/projected/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-kube-api-access-x5x8h\") pod \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.768639 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-ssh-key-openstack-edpm-ipam\") pod \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.768672 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-inventory\") pod \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.768706 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.768730 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-nova-metadata-neutron-config-0\") pod \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\" (UID: \"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6\") " Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.776432 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3ef3746b-6714-4c50-b0fe-3d5d1632f6c6" (UID: "3ef3746b-6714-4c50-b0fe-3d5d1632f6c6"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.781946 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-kube-api-access-x5x8h" (OuterVolumeSpecName: "kube-api-access-x5x8h") pod "3ef3746b-6714-4c50-b0fe-3d5d1632f6c6" (UID: "3ef3746b-6714-4c50-b0fe-3d5d1632f6c6"). InnerVolumeSpecName "kube-api-access-x5x8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.803134 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "3ef3746b-6714-4c50-b0fe-3d5d1632f6c6" (UID: "3ef3746b-6714-4c50-b0fe-3d5d1632f6c6"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.807851 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-inventory" (OuterVolumeSpecName: "inventory") pod "3ef3746b-6714-4c50-b0fe-3d5d1632f6c6" (UID: "3ef3746b-6714-4c50-b0fe-3d5d1632f6c6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.817068 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "3ef3746b-6714-4c50-b0fe-3d5d1632f6c6" (UID: "3ef3746b-6714-4c50-b0fe-3d5d1632f6c6"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.817098 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3ef3746b-6714-4c50-b0fe-3d5d1632f6c6" (UID: "3ef3746b-6714-4c50-b0fe-3d5d1632f6c6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.872455 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5x8h\" (UniqueName: \"kubernetes.io/projected/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-kube-api-access-x5x8h\") on node \"crc\" DevicePath \"\"" Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.872613 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.872685 4717 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.872782 4717 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.872861 4717 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:23:48 crc kubenswrapper[4717]: I0218 12:23:48.873043 4717 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef3746b-6714-4c50-b0fe-3d5d1632f6c6-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.186658 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" event={"ID":"3ef3746b-6714-4c50-b0fe-3d5d1632f6c6","Type":"ContainerDied","Data":"8df25a5bd4489b0f279bf17478678fdcbe16840ff3b3d8e1ac7de16201b5210c"} Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.186711 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8df25a5bd4489b0f279bf17478678fdcbe16840ff3b3d8e1ac7de16201b5210c" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.186713 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.398141 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj"] Feb 18 12:23:49 crc kubenswrapper[4717]: E0218 12:23:49.398707 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef3746b-6714-4c50-b0fe-3d5d1632f6c6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.398729 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef3746b-6714-4c50-b0fe-3d5d1632f6c6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.398960 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef3746b-6714-4c50-b0fe-3d5d1632f6c6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.399851 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.402297 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.402506 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.403396 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.403631 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.405535 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.411055 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj"] Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.486768 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.486996 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkw76\" (UniqueName: \"kubernetes.io/projected/6e45806a-5dfc-4368-b276-a59ba198f17e-kube-api-access-tkw76\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj\" (UID: 
\"6e45806a-5dfc-4368-b276-a59ba198f17e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.487279 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.487358 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.487462 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.589589 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkw76\" (UniqueName: \"kubernetes.io/projected/6e45806a-5dfc-4368-b276-a59ba198f17e-kube-api-access-tkw76\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.589702 
4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.589741 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.589793 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.589857 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.595701 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-ssh-key-openstack-edpm-ipam\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.595930 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.596115 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.597073 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.607812 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkw76\" (UniqueName: \"kubernetes.io/projected/6e45806a-5dfc-4368-b276-a59ba198f17e-kube-api-access-tkw76\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:49 crc kubenswrapper[4717]: I0218 12:23:49.724863 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:23:50 crc kubenswrapper[4717]: I0218 12:23:50.323722 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj"] Feb 18 12:23:51 crc kubenswrapper[4717]: I0218 12:23:51.204685 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" event={"ID":"6e45806a-5dfc-4368-b276-a59ba198f17e","Type":"ContainerStarted","Data":"05b6b8ae8e68e2853de57c5d1c415f4906d345c434093a150ae672ff96df1836"} Feb 18 12:23:51 crc kubenswrapper[4717]: I0218 12:23:51.205834 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" event={"ID":"6e45806a-5dfc-4368-b276-a59ba198f17e","Type":"ContainerStarted","Data":"1b0e8e413d4555751937c239d622fcc3eb2d707c4aa5c12786c9855478bcaea5"} Feb 18 12:23:51 crc kubenswrapper[4717]: I0218 12:23:51.227873 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" podStartSLOduration=1.659525854 podStartE2EDuration="2.227849487s" podCreationTimestamp="2026-02-18 12:23:49 +0000 UTC" firstStartedPulling="2026-02-18 12:23:50.33035685 +0000 UTC m=+2064.732458156" lastFinishedPulling="2026-02-18 12:23:50.898680473 +0000 UTC m=+2065.300781789" observedRunningTime="2026-02-18 12:23:51.221282618 +0000 UTC m=+2065.623383944" watchObservedRunningTime="2026-02-18 12:23:51.227849487 +0000 UTC m=+2065.629950803" Feb 18 12:24:02 crc kubenswrapper[4717]: I0218 12:24:02.739308 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p668w"] Feb 18 12:24:02 crc kubenswrapper[4717]: I0218 12:24:02.742701 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:02 crc kubenswrapper[4717]: I0218 12:24:02.751528 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p668w"] Feb 18 12:24:02 crc kubenswrapper[4717]: I0218 12:24:02.871141 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3288234-afb5-4750-a844-af58db55034b-utilities\") pod \"community-operators-p668w\" (UID: \"a3288234-afb5-4750-a844-af58db55034b\") " pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:02 crc kubenswrapper[4717]: I0218 12:24:02.871238 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjst9\" (UniqueName: \"kubernetes.io/projected/a3288234-afb5-4750-a844-af58db55034b-kube-api-access-fjst9\") pod \"community-operators-p668w\" (UID: \"a3288234-afb5-4750-a844-af58db55034b\") " pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:02 crc kubenswrapper[4717]: I0218 12:24:02.871633 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3288234-afb5-4750-a844-af58db55034b-catalog-content\") pod \"community-operators-p668w\" (UID: \"a3288234-afb5-4750-a844-af58db55034b\") " pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:02 crc kubenswrapper[4717]: I0218 12:24:02.973521 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjst9\" (UniqueName: \"kubernetes.io/projected/a3288234-afb5-4750-a844-af58db55034b-kube-api-access-fjst9\") pod \"community-operators-p668w\" (UID: \"a3288234-afb5-4750-a844-af58db55034b\") " pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:02 crc kubenswrapper[4717]: I0218 12:24:02.973657 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3288234-afb5-4750-a844-af58db55034b-catalog-content\") pod \"community-operators-p668w\" (UID: \"a3288234-afb5-4750-a844-af58db55034b\") " pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:02 crc kubenswrapper[4717]: I0218 12:24:02.973740 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3288234-afb5-4750-a844-af58db55034b-utilities\") pod \"community-operators-p668w\" (UID: \"a3288234-afb5-4750-a844-af58db55034b\") " pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:02 crc kubenswrapper[4717]: I0218 12:24:02.974231 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3288234-afb5-4750-a844-af58db55034b-utilities\") pod \"community-operators-p668w\" (UID: \"a3288234-afb5-4750-a844-af58db55034b\") " pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:02 crc kubenswrapper[4717]: I0218 12:24:02.974278 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3288234-afb5-4750-a844-af58db55034b-catalog-content\") pod \"community-operators-p668w\" (UID: \"a3288234-afb5-4750-a844-af58db55034b\") " pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:03 crc kubenswrapper[4717]: I0218 12:24:03.000980 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjst9\" (UniqueName: \"kubernetes.io/projected/a3288234-afb5-4750-a844-af58db55034b-kube-api-access-fjst9\") pod \"community-operators-p668w\" (UID: \"a3288234-afb5-4750-a844-af58db55034b\") " pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:03 crc kubenswrapper[4717]: I0218 12:24:03.063896 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:03 crc kubenswrapper[4717]: I0218 12:24:03.707005 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p668w"] Feb 18 12:24:03 crc kubenswrapper[4717]: W0218 12:24:03.710653 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3288234_afb5_4750_a844_af58db55034b.slice/crio-4cd782abb91d3f3ab7dd395508e49242f350b49831924c72a784e73dfea7c61f WatchSource:0}: Error finding container 4cd782abb91d3f3ab7dd395508e49242f350b49831924c72a784e73dfea7c61f: Status 404 returned error can't find the container with id 4cd782abb91d3f3ab7dd395508e49242f350b49831924c72a784e73dfea7c61f Feb 18 12:24:04 crc kubenswrapper[4717]: I0218 12:24:04.343599 4717 generic.go:334] "Generic (PLEG): container finished" podID="a3288234-afb5-4750-a844-af58db55034b" containerID="792c529127ec7eff4b717fe412caf336550086ce3bca118997ca85b2a634beb0" exitCode=0 Feb 18 12:24:04 crc kubenswrapper[4717]: I0218 12:24:04.344100 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p668w" event={"ID":"a3288234-afb5-4750-a844-af58db55034b","Type":"ContainerDied","Data":"792c529127ec7eff4b717fe412caf336550086ce3bca118997ca85b2a634beb0"} Feb 18 12:24:04 crc kubenswrapper[4717]: I0218 12:24:04.344140 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p668w" event={"ID":"a3288234-afb5-4750-a844-af58db55034b","Type":"ContainerStarted","Data":"4cd782abb91d3f3ab7dd395508e49242f350b49831924c72a784e73dfea7c61f"} Feb 18 12:24:05 crc kubenswrapper[4717]: I0218 12:24:05.363543 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p668w" 
event={"ID":"a3288234-afb5-4750-a844-af58db55034b","Type":"ContainerStarted","Data":"fae8b81a38935c6ad24f876c29a8160f8f2396e10b4c6f25a39e231f47a900eb"} Feb 18 12:24:06 crc kubenswrapper[4717]: I0218 12:24:06.375486 4717 generic.go:334] "Generic (PLEG): container finished" podID="a3288234-afb5-4750-a844-af58db55034b" containerID="fae8b81a38935c6ad24f876c29a8160f8f2396e10b4c6f25a39e231f47a900eb" exitCode=0 Feb 18 12:24:06 crc kubenswrapper[4717]: I0218 12:24:06.375572 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p668w" event={"ID":"a3288234-afb5-4750-a844-af58db55034b","Type":"ContainerDied","Data":"fae8b81a38935c6ad24f876c29a8160f8f2396e10b4c6f25a39e231f47a900eb"} Feb 18 12:24:07 crc kubenswrapper[4717]: I0218 12:24:07.390885 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p668w" event={"ID":"a3288234-afb5-4750-a844-af58db55034b","Type":"ContainerStarted","Data":"49c90856218f49204004b2b12347591eeafab5e659856a42b5f7f67765f510be"} Feb 18 12:24:07 crc kubenswrapper[4717]: I0218 12:24:07.435664 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p668w" podStartSLOduration=2.9897385229999998 podStartE2EDuration="5.43563281s" podCreationTimestamp="2026-02-18 12:24:02 +0000 UTC" firstStartedPulling="2026-02-18 12:24:04.347109519 +0000 UTC m=+2078.749210845" lastFinishedPulling="2026-02-18 12:24:06.793003816 +0000 UTC m=+2081.195105132" observedRunningTime="2026-02-18 12:24:07.419902267 +0000 UTC m=+2081.822003583" watchObservedRunningTime="2026-02-18 12:24:07.43563281 +0000 UTC m=+2081.837734136" Feb 18 12:24:13 crc kubenswrapper[4717]: I0218 12:24:13.065437 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:13 crc kubenswrapper[4717]: I0218 12:24:13.067286 4717 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:13 crc kubenswrapper[4717]: I0218 12:24:13.115311 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:13 crc kubenswrapper[4717]: I0218 12:24:13.487899 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:13 crc kubenswrapper[4717]: I0218 12:24:13.539909 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p668w"] Feb 18 12:24:15 crc kubenswrapper[4717]: I0218 12:24:15.464154 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p668w" podUID="a3288234-afb5-4750-a844-af58db55034b" containerName="registry-server" containerID="cri-o://49c90856218f49204004b2b12347591eeafab5e659856a42b5f7f67765f510be" gracePeriod=2 Feb 18 12:24:15 crc kubenswrapper[4717]: I0218 12:24:15.900594 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.073243 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjst9\" (UniqueName: \"kubernetes.io/projected/a3288234-afb5-4750-a844-af58db55034b-kube-api-access-fjst9\") pod \"a3288234-afb5-4750-a844-af58db55034b\" (UID: \"a3288234-afb5-4750-a844-af58db55034b\") " Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.073552 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3288234-afb5-4750-a844-af58db55034b-catalog-content\") pod \"a3288234-afb5-4750-a844-af58db55034b\" (UID: \"a3288234-afb5-4750-a844-af58db55034b\") " Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.073605 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3288234-afb5-4750-a844-af58db55034b-utilities\") pod \"a3288234-afb5-4750-a844-af58db55034b\" (UID: \"a3288234-afb5-4750-a844-af58db55034b\") " Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.074918 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3288234-afb5-4750-a844-af58db55034b-utilities" (OuterVolumeSpecName: "utilities") pod "a3288234-afb5-4750-a844-af58db55034b" (UID: "a3288234-afb5-4750-a844-af58db55034b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.084799 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3288234-afb5-4750-a844-af58db55034b-kube-api-access-fjst9" (OuterVolumeSpecName: "kube-api-access-fjst9") pod "a3288234-afb5-4750-a844-af58db55034b" (UID: "a3288234-afb5-4750-a844-af58db55034b"). InnerVolumeSpecName "kube-api-access-fjst9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.119007 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3288234-afb5-4750-a844-af58db55034b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3288234-afb5-4750-a844-af58db55034b" (UID: "a3288234-afb5-4750-a844-af58db55034b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.176302 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjst9\" (UniqueName: \"kubernetes.io/projected/a3288234-afb5-4750-a844-af58db55034b-kube-api-access-fjst9\") on node \"crc\" DevicePath \"\"" Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.176338 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3288234-afb5-4750-a844-af58db55034b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.176348 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3288234-afb5-4750-a844-af58db55034b-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.474240 4717 generic.go:334] "Generic (PLEG): container finished" podID="a3288234-afb5-4750-a844-af58db55034b" containerID="49c90856218f49204004b2b12347591eeafab5e659856a42b5f7f67765f510be" exitCode=0 Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.474305 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p668w" event={"ID":"a3288234-afb5-4750-a844-af58db55034b","Type":"ContainerDied","Data":"49c90856218f49204004b2b12347591eeafab5e659856a42b5f7f67765f510be"} Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.474350 4717 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-p668w" Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.474366 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p668w" event={"ID":"a3288234-afb5-4750-a844-af58db55034b","Type":"ContainerDied","Data":"4cd782abb91d3f3ab7dd395508e49242f350b49831924c72a784e73dfea7c61f"} Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.474388 4717 scope.go:117] "RemoveContainer" containerID="49c90856218f49204004b2b12347591eeafab5e659856a42b5f7f67765f510be" Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.502343 4717 scope.go:117] "RemoveContainer" containerID="fae8b81a38935c6ad24f876c29a8160f8f2396e10b4c6f25a39e231f47a900eb" Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.507516 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p668w"] Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.517409 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p668w"] Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.527431 4717 scope.go:117] "RemoveContainer" containerID="792c529127ec7eff4b717fe412caf336550086ce3bca118997ca85b2a634beb0" Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.572753 4717 scope.go:117] "RemoveContainer" containerID="49c90856218f49204004b2b12347591eeafab5e659856a42b5f7f67765f510be" Feb 18 12:24:16 crc kubenswrapper[4717]: E0218 12:24:16.573399 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49c90856218f49204004b2b12347591eeafab5e659856a42b5f7f67765f510be\": container with ID starting with 49c90856218f49204004b2b12347591eeafab5e659856a42b5f7f67765f510be not found: ID does not exist" containerID="49c90856218f49204004b2b12347591eeafab5e659856a42b5f7f67765f510be" Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.573444 
4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c90856218f49204004b2b12347591eeafab5e659856a42b5f7f67765f510be"} err="failed to get container status \"49c90856218f49204004b2b12347591eeafab5e659856a42b5f7f67765f510be\": rpc error: code = NotFound desc = could not find container \"49c90856218f49204004b2b12347591eeafab5e659856a42b5f7f67765f510be\": container with ID starting with 49c90856218f49204004b2b12347591eeafab5e659856a42b5f7f67765f510be not found: ID does not exist" Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.573470 4717 scope.go:117] "RemoveContainer" containerID="fae8b81a38935c6ad24f876c29a8160f8f2396e10b4c6f25a39e231f47a900eb" Feb 18 12:24:16 crc kubenswrapper[4717]: E0218 12:24:16.573813 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fae8b81a38935c6ad24f876c29a8160f8f2396e10b4c6f25a39e231f47a900eb\": container with ID starting with fae8b81a38935c6ad24f876c29a8160f8f2396e10b4c6f25a39e231f47a900eb not found: ID does not exist" containerID="fae8b81a38935c6ad24f876c29a8160f8f2396e10b4c6f25a39e231f47a900eb" Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.573835 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae8b81a38935c6ad24f876c29a8160f8f2396e10b4c6f25a39e231f47a900eb"} err="failed to get container status \"fae8b81a38935c6ad24f876c29a8160f8f2396e10b4c6f25a39e231f47a900eb\": rpc error: code = NotFound desc = could not find container \"fae8b81a38935c6ad24f876c29a8160f8f2396e10b4c6f25a39e231f47a900eb\": container with ID starting with fae8b81a38935c6ad24f876c29a8160f8f2396e10b4c6f25a39e231f47a900eb not found: ID does not exist" Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.573849 4717 scope.go:117] "RemoveContainer" containerID="792c529127ec7eff4b717fe412caf336550086ce3bca118997ca85b2a634beb0" Feb 18 12:24:16 crc kubenswrapper[4717]: E0218 
12:24:16.574102 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"792c529127ec7eff4b717fe412caf336550086ce3bca118997ca85b2a634beb0\": container with ID starting with 792c529127ec7eff4b717fe412caf336550086ce3bca118997ca85b2a634beb0 not found: ID does not exist" containerID="792c529127ec7eff4b717fe412caf336550086ce3bca118997ca85b2a634beb0" Feb 18 12:24:16 crc kubenswrapper[4717]: I0218 12:24:16.574126 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"792c529127ec7eff4b717fe412caf336550086ce3bca118997ca85b2a634beb0"} err="failed to get container status \"792c529127ec7eff4b717fe412caf336550086ce3bca118997ca85b2a634beb0\": rpc error: code = NotFound desc = could not find container \"792c529127ec7eff4b717fe412caf336550086ce3bca118997ca85b2a634beb0\": container with ID starting with 792c529127ec7eff4b717fe412caf336550086ce3bca118997ca85b2a634beb0 not found: ID does not exist" Feb 18 12:24:17 crc kubenswrapper[4717]: I0218 12:24:17.046821 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3288234-afb5-4750-a844-af58db55034b" path="/var/lib/kubelet/pods/a3288234-afb5-4750-a844-af58db55034b/volumes" Feb 18 12:24:34 crc kubenswrapper[4717]: I0218 12:24:34.702581 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9cm8n"] Feb 18 12:24:34 crc kubenswrapper[4717]: E0218 12:24:34.703788 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3288234-afb5-4750-a844-af58db55034b" containerName="extract-content" Feb 18 12:24:34 crc kubenswrapper[4717]: I0218 12:24:34.703806 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3288234-afb5-4750-a844-af58db55034b" containerName="extract-content" Feb 18 12:24:34 crc kubenswrapper[4717]: E0218 12:24:34.703831 4717 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a3288234-afb5-4750-a844-af58db55034b" containerName="registry-server" Feb 18 12:24:34 crc kubenswrapper[4717]: I0218 12:24:34.703842 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3288234-afb5-4750-a844-af58db55034b" containerName="registry-server" Feb 18 12:24:34 crc kubenswrapper[4717]: E0218 12:24:34.703860 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3288234-afb5-4750-a844-af58db55034b" containerName="extract-utilities" Feb 18 12:24:34 crc kubenswrapper[4717]: I0218 12:24:34.703869 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3288234-afb5-4750-a844-af58db55034b" containerName="extract-utilities" Feb 18 12:24:34 crc kubenswrapper[4717]: I0218 12:24:34.704145 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3288234-afb5-4750-a844-af58db55034b" containerName="registry-server" Feb 18 12:24:34 crc kubenswrapper[4717]: I0218 12:24:34.706117 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:34 crc kubenswrapper[4717]: I0218 12:24:34.729055 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9cm8n"] Feb 18 12:24:34 crc kubenswrapper[4717]: I0218 12:24:34.784457 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmxw7\" (UniqueName: \"kubernetes.io/projected/7fb24fda-417d-4d7d-8848-5b90db35334d-kube-api-access-mmxw7\") pod \"certified-operators-9cm8n\" (UID: \"7fb24fda-417d-4d7d-8848-5b90db35334d\") " pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:34 crc kubenswrapper[4717]: I0218 12:24:34.784578 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb24fda-417d-4d7d-8848-5b90db35334d-catalog-content\") pod \"certified-operators-9cm8n\" (UID: 
\"7fb24fda-417d-4d7d-8848-5b90db35334d\") " pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:34 crc kubenswrapper[4717]: I0218 12:24:34.784610 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb24fda-417d-4d7d-8848-5b90db35334d-utilities\") pod \"certified-operators-9cm8n\" (UID: \"7fb24fda-417d-4d7d-8848-5b90db35334d\") " pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:34 crc kubenswrapper[4717]: I0218 12:24:34.886307 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmxw7\" (UniqueName: \"kubernetes.io/projected/7fb24fda-417d-4d7d-8848-5b90db35334d-kube-api-access-mmxw7\") pod \"certified-operators-9cm8n\" (UID: \"7fb24fda-417d-4d7d-8848-5b90db35334d\") " pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:34 crc kubenswrapper[4717]: I0218 12:24:34.886513 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb24fda-417d-4d7d-8848-5b90db35334d-catalog-content\") pod \"certified-operators-9cm8n\" (UID: \"7fb24fda-417d-4d7d-8848-5b90db35334d\") " pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:34 crc kubenswrapper[4717]: I0218 12:24:34.886559 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb24fda-417d-4d7d-8848-5b90db35334d-utilities\") pod \"certified-operators-9cm8n\" (UID: \"7fb24fda-417d-4d7d-8848-5b90db35334d\") " pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:34 crc kubenswrapper[4717]: I0218 12:24:34.886973 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb24fda-417d-4d7d-8848-5b90db35334d-catalog-content\") pod \"certified-operators-9cm8n\" (UID: 
\"7fb24fda-417d-4d7d-8848-5b90db35334d\") " pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:34 crc kubenswrapper[4717]: I0218 12:24:34.887040 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb24fda-417d-4d7d-8848-5b90db35334d-utilities\") pod \"certified-operators-9cm8n\" (UID: \"7fb24fda-417d-4d7d-8848-5b90db35334d\") " pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:34 crc kubenswrapper[4717]: I0218 12:24:34.907195 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmxw7\" (UniqueName: \"kubernetes.io/projected/7fb24fda-417d-4d7d-8848-5b90db35334d-kube-api-access-mmxw7\") pod \"certified-operators-9cm8n\" (UID: \"7fb24fda-417d-4d7d-8848-5b90db35334d\") " pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:35 crc kubenswrapper[4717]: I0218 12:24:35.038607 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:35 crc kubenswrapper[4717]: I0218 12:24:35.532935 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9cm8n"] Feb 18 12:24:35 crc kubenswrapper[4717]: W0218 12:24:35.537138 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fb24fda_417d_4d7d_8848_5b90db35334d.slice/crio-8e50b9c5a26f485b2107bb263f5d6dfbe677c080bfc8646dee5d9fd4f80b6876 WatchSource:0}: Error finding container 8e50b9c5a26f485b2107bb263f5d6dfbe677c080bfc8646dee5d9fd4f80b6876: Status 404 returned error can't find the container with id 8e50b9c5a26f485b2107bb263f5d6dfbe677c080bfc8646dee5d9fd4f80b6876 Feb 18 12:24:35 crc kubenswrapper[4717]: I0218 12:24:35.642376 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cm8n" 
event={"ID":"7fb24fda-417d-4d7d-8848-5b90db35334d","Type":"ContainerStarted","Data":"8e50b9c5a26f485b2107bb263f5d6dfbe677c080bfc8646dee5d9fd4f80b6876"} Feb 18 12:24:36 crc kubenswrapper[4717]: I0218 12:24:36.652751 4717 generic.go:334] "Generic (PLEG): container finished" podID="7fb24fda-417d-4d7d-8848-5b90db35334d" containerID="41961816ffab8d7917aee9c43171a03786d34a975ebaf8eae9913112bef56f6b" exitCode=0 Feb 18 12:24:36 crc kubenswrapper[4717]: I0218 12:24:36.652843 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cm8n" event={"ID":"7fb24fda-417d-4d7d-8848-5b90db35334d","Type":"ContainerDied","Data":"41961816ffab8d7917aee9c43171a03786d34a975ebaf8eae9913112bef56f6b"} Feb 18 12:24:38 crc kubenswrapper[4717]: I0218 12:24:38.672516 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cm8n" event={"ID":"7fb24fda-417d-4d7d-8848-5b90db35334d","Type":"ContainerStarted","Data":"e23ecc68db33ada51ba732fc2a51c5648e0b4aede69f922f1cd5cf54508e3c1f"} Feb 18 12:24:39 crc kubenswrapper[4717]: I0218 12:24:39.279882 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b58kx"] Feb 18 12:24:39 crc kubenswrapper[4717]: I0218 12:24:39.285729 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:39 crc kubenswrapper[4717]: I0218 12:24:39.331585 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b58kx"] Feb 18 12:24:39 crc kubenswrapper[4717]: I0218 12:24:39.387161 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rscjd\" (UniqueName: \"kubernetes.io/projected/3b543c09-54a0-4bc6-b21c-b1aa1806502f-kube-api-access-rscjd\") pod \"redhat-marketplace-b58kx\" (UID: \"3b543c09-54a0-4bc6-b21c-b1aa1806502f\") " pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:39 crc kubenswrapper[4717]: I0218 12:24:39.388992 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b543c09-54a0-4bc6-b21c-b1aa1806502f-utilities\") pod \"redhat-marketplace-b58kx\" (UID: \"3b543c09-54a0-4bc6-b21c-b1aa1806502f\") " pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:39 crc kubenswrapper[4717]: I0218 12:24:39.389167 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b543c09-54a0-4bc6-b21c-b1aa1806502f-catalog-content\") pod \"redhat-marketplace-b58kx\" (UID: \"3b543c09-54a0-4bc6-b21c-b1aa1806502f\") " pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:39 crc kubenswrapper[4717]: I0218 12:24:39.491079 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b543c09-54a0-4bc6-b21c-b1aa1806502f-utilities\") pod \"redhat-marketplace-b58kx\" (UID: \"3b543c09-54a0-4bc6-b21c-b1aa1806502f\") " pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:39 crc kubenswrapper[4717]: I0218 12:24:39.491165 4717 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b543c09-54a0-4bc6-b21c-b1aa1806502f-catalog-content\") pod \"redhat-marketplace-b58kx\" (UID: \"3b543c09-54a0-4bc6-b21c-b1aa1806502f\") " pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:39 crc kubenswrapper[4717]: I0218 12:24:39.491202 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rscjd\" (UniqueName: \"kubernetes.io/projected/3b543c09-54a0-4bc6-b21c-b1aa1806502f-kube-api-access-rscjd\") pod \"redhat-marketplace-b58kx\" (UID: \"3b543c09-54a0-4bc6-b21c-b1aa1806502f\") " pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:39 crc kubenswrapper[4717]: I0218 12:24:39.491716 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b543c09-54a0-4bc6-b21c-b1aa1806502f-utilities\") pod \"redhat-marketplace-b58kx\" (UID: \"3b543c09-54a0-4bc6-b21c-b1aa1806502f\") " pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:39 crc kubenswrapper[4717]: I0218 12:24:39.492689 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b543c09-54a0-4bc6-b21c-b1aa1806502f-catalog-content\") pod \"redhat-marketplace-b58kx\" (UID: \"3b543c09-54a0-4bc6-b21c-b1aa1806502f\") " pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:39 crc kubenswrapper[4717]: I0218 12:24:39.515778 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rscjd\" (UniqueName: \"kubernetes.io/projected/3b543c09-54a0-4bc6-b21c-b1aa1806502f-kube-api-access-rscjd\") pod \"redhat-marketplace-b58kx\" (UID: \"3b543c09-54a0-4bc6-b21c-b1aa1806502f\") " pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:39 crc kubenswrapper[4717]: I0218 12:24:39.609665 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:39 crc kubenswrapper[4717]: I0218 12:24:39.708032 4717 generic.go:334] "Generic (PLEG): container finished" podID="7fb24fda-417d-4d7d-8848-5b90db35334d" containerID="e23ecc68db33ada51ba732fc2a51c5648e0b4aede69f922f1cd5cf54508e3c1f" exitCode=0 Feb 18 12:24:39 crc kubenswrapper[4717]: I0218 12:24:39.708112 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cm8n" event={"ID":"7fb24fda-417d-4d7d-8848-5b90db35334d","Type":"ContainerDied","Data":"e23ecc68db33ada51ba732fc2a51c5648e0b4aede69f922f1cd5cf54508e3c1f"} Feb 18 12:24:40 crc kubenswrapper[4717]: I0218 12:24:40.163817 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b58kx"] Feb 18 12:24:40 crc kubenswrapper[4717]: W0218 12:24:40.166791 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b543c09_54a0_4bc6_b21c_b1aa1806502f.slice/crio-35f0bf6ae11ef7122d195474e1a6122b53985bb2e702325dace988a642039ffa WatchSource:0}: Error finding container 35f0bf6ae11ef7122d195474e1a6122b53985bb2e702325dace988a642039ffa: Status 404 returned error can't find the container with id 35f0bf6ae11ef7122d195474e1a6122b53985bb2e702325dace988a642039ffa Feb 18 12:24:40 crc kubenswrapper[4717]: I0218 12:24:40.718885 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cm8n" event={"ID":"7fb24fda-417d-4d7d-8848-5b90db35334d","Type":"ContainerStarted","Data":"17e4c3ff3645c42491d97e1576bc9015d65992bfa9bdf95ac0225ef4887ca48f"} Feb 18 12:24:40 crc kubenswrapper[4717]: I0218 12:24:40.720715 4717 generic.go:334] "Generic (PLEG): container finished" podID="3b543c09-54a0-4bc6-b21c-b1aa1806502f" containerID="70b68221912df3cfe804fe7c0d06fd8bf3c986968350980b427f4f017f1f18c4" exitCode=0 Feb 18 12:24:40 crc kubenswrapper[4717]: I0218 
12:24:40.720754 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b58kx" event={"ID":"3b543c09-54a0-4bc6-b21c-b1aa1806502f","Type":"ContainerDied","Data":"70b68221912df3cfe804fe7c0d06fd8bf3c986968350980b427f4f017f1f18c4"} Feb 18 12:24:40 crc kubenswrapper[4717]: I0218 12:24:40.720775 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b58kx" event={"ID":"3b543c09-54a0-4bc6-b21c-b1aa1806502f","Type":"ContainerStarted","Data":"35f0bf6ae11ef7122d195474e1a6122b53985bb2e702325dace988a642039ffa"} Feb 18 12:24:40 crc kubenswrapper[4717]: I0218 12:24:40.747353 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9cm8n" podStartSLOduration=3.032501185 podStartE2EDuration="6.747323421s" podCreationTimestamp="2026-02-18 12:24:34 +0000 UTC" firstStartedPulling="2026-02-18 12:24:36.654708932 +0000 UTC m=+2111.056810268" lastFinishedPulling="2026-02-18 12:24:40.369531198 +0000 UTC m=+2114.771632504" observedRunningTime="2026-02-18 12:24:40.741663298 +0000 UTC m=+2115.143764624" watchObservedRunningTime="2026-02-18 12:24:40.747323421 +0000 UTC m=+2115.149424737" Feb 18 12:24:41 crc kubenswrapper[4717]: I0218 12:24:41.734985 4717 generic.go:334] "Generic (PLEG): container finished" podID="3b543c09-54a0-4bc6-b21c-b1aa1806502f" containerID="6a3ce1220ad95541eaf1aa0ad02b27c774ab089deba82b17a767978c67caea8d" exitCode=0 Feb 18 12:24:41 crc kubenswrapper[4717]: I0218 12:24:41.735205 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b58kx" event={"ID":"3b543c09-54a0-4bc6-b21c-b1aa1806502f","Type":"ContainerDied","Data":"6a3ce1220ad95541eaf1aa0ad02b27c774ab089deba82b17a767978c67caea8d"} Feb 18 12:24:42 crc kubenswrapper[4717]: I0218 12:24:42.745507 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b58kx" 
event={"ID":"3b543c09-54a0-4bc6-b21c-b1aa1806502f","Type":"ContainerStarted","Data":"a51ec4e957e549346879d82c873b4c738cf9d8a3b63f204185e009888ddb6c02"} Feb 18 12:24:42 crc kubenswrapper[4717]: I0218 12:24:42.768209 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b58kx" podStartSLOduration=2.308927321 podStartE2EDuration="3.768165873s" podCreationTimestamp="2026-02-18 12:24:39 +0000 UTC" firstStartedPulling="2026-02-18 12:24:40.722058853 +0000 UTC m=+2115.124160169" lastFinishedPulling="2026-02-18 12:24:42.181297405 +0000 UTC m=+2116.583398721" observedRunningTime="2026-02-18 12:24:42.76145543 +0000 UTC m=+2117.163556746" watchObservedRunningTime="2026-02-18 12:24:42.768165873 +0000 UTC m=+2117.170267189" Feb 18 12:24:45 crc kubenswrapper[4717]: I0218 12:24:45.048202 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:45 crc kubenswrapper[4717]: I0218 12:24:45.048529 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:45 crc kubenswrapper[4717]: I0218 12:24:45.095074 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:45 crc kubenswrapper[4717]: I0218 12:24:45.825816 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:46 crc kubenswrapper[4717]: I0218 12:24:46.281756 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9cm8n"] Feb 18 12:24:47 crc kubenswrapper[4717]: I0218 12:24:47.794783 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9cm8n" podUID="7fb24fda-417d-4d7d-8848-5b90db35334d" containerName="registry-server" 
containerID="cri-o://17e4c3ff3645c42491d97e1576bc9015d65992bfa9bdf95ac0225ef4887ca48f" gracePeriod=2 Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.301781 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.441203 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb24fda-417d-4d7d-8848-5b90db35334d-utilities\") pod \"7fb24fda-417d-4d7d-8848-5b90db35334d\" (UID: \"7fb24fda-417d-4d7d-8848-5b90db35334d\") " Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.441440 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb24fda-417d-4d7d-8848-5b90db35334d-catalog-content\") pod \"7fb24fda-417d-4d7d-8848-5b90db35334d\" (UID: \"7fb24fda-417d-4d7d-8848-5b90db35334d\") " Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.441516 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmxw7\" (UniqueName: \"kubernetes.io/projected/7fb24fda-417d-4d7d-8848-5b90db35334d-kube-api-access-mmxw7\") pod \"7fb24fda-417d-4d7d-8848-5b90db35334d\" (UID: \"7fb24fda-417d-4d7d-8848-5b90db35334d\") " Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.442037 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fb24fda-417d-4d7d-8848-5b90db35334d-utilities" (OuterVolumeSpecName: "utilities") pod "7fb24fda-417d-4d7d-8848-5b90db35334d" (UID: "7fb24fda-417d-4d7d-8848-5b90db35334d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.451674 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb24fda-417d-4d7d-8848-5b90db35334d-kube-api-access-mmxw7" (OuterVolumeSpecName: "kube-api-access-mmxw7") pod "7fb24fda-417d-4d7d-8848-5b90db35334d" (UID: "7fb24fda-417d-4d7d-8848-5b90db35334d"). InnerVolumeSpecName "kube-api-access-mmxw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.488864 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fb24fda-417d-4d7d-8848-5b90db35334d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fb24fda-417d-4d7d-8848-5b90db35334d" (UID: "7fb24fda-417d-4d7d-8848-5b90db35334d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.543624 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb24fda-417d-4d7d-8848-5b90db35334d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.543721 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmxw7\" (UniqueName: \"kubernetes.io/projected/7fb24fda-417d-4d7d-8848-5b90db35334d-kube-api-access-mmxw7\") on node \"crc\" DevicePath \"\"" Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.543736 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb24fda-417d-4d7d-8848-5b90db35334d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.805718 4717 generic.go:334] "Generic (PLEG): container finished" podID="7fb24fda-417d-4d7d-8848-5b90db35334d" 
containerID="17e4c3ff3645c42491d97e1576bc9015d65992bfa9bdf95ac0225ef4887ca48f" exitCode=0 Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.805769 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cm8n" event={"ID":"7fb24fda-417d-4d7d-8848-5b90db35334d","Type":"ContainerDied","Data":"17e4c3ff3645c42491d97e1576bc9015d65992bfa9bdf95ac0225ef4887ca48f"} Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.805803 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cm8n" event={"ID":"7fb24fda-417d-4d7d-8848-5b90db35334d","Type":"ContainerDied","Data":"8e50b9c5a26f485b2107bb263f5d6dfbe677c080bfc8646dee5d9fd4f80b6876"} Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.805825 4717 scope.go:117] "RemoveContainer" containerID="17e4c3ff3645c42491d97e1576bc9015d65992bfa9bdf95ac0225ef4887ca48f" Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.805831 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9cm8n" Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.827212 4717 scope.go:117] "RemoveContainer" containerID="e23ecc68db33ada51ba732fc2a51c5648e0b4aede69f922f1cd5cf54508e3c1f" Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.841764 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9cm8n"] Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.858175 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9cm8n"] Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.874371 4717 scope.go:117] "RemoveContainer" containerID="41961816ffab8d7917aee9c43171a03786d34a975ebaf8eae9913112bef56f6b" Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.906299 4717 scope.go:117] "RemoveContainer" containerID="17e4c3ff3645c42491d97e1576bc9015d65992bfa9bdf95ac0225ef4887ca48f" Feb 18 12:24:48 crc kubenswrapper[4717]: E0218 12:24:48.906776 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17e4c3ff3645c42491d97e1576bc9015d65992bfa9bdf95ac0225ef4887ca48f\": container with ID starting with 17e4c3ff3645c42491d97e1576bc9015d65992bfa9bdf95ac0225ef4887ca48f not found: ID does not exist" containerID="17e4c3ff3645c42491d97e1576bc9015d65992bfa9bdf95ac0225ef4887ca48f" Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.906829 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17e4c3ff3645c42491d97e1576bc9015d65992bfa9bdf95ac0225ef4887ca48f"} err="failed to get container status \"17e4c3ff3645c42491d97e1576bc9015d65992bfa9bdf95ac0225ef4887ca48f\": rpc error: code = NotFound desc = could not find container \"17e4c3ff3645c42491d97e1576bc9015d65992bfa9bdf95ac0225ef4887ca48f\": container with ID starting with 17e4c3ff3645c42491d97e1576bc9015d65992bfa9bdf95ac0225ef4887ca48f not 
found: ID does not exist" Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.906863 4717 scope.go:117] "RemoveContainer" containerID="e23ecc68db33ada51ba732fc2a51c5648e0b4aede69f922f1cd5cf54508e3c1f" Feb 18 12:24:48 crc kubenswrapper[4717]: E0218 12:24:48.907362 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e23ecc68db33ada51ba732fc2a51c5648e0b4aede69f922f1cd5cf54508e3c1f\": container with ID starting with e23ecc68db33ada51ba732fc2a51c5648e0b4aede69f922f1cd5cf54508e3c1f not found: ID does not exist" containerID="e23ecc68db33ada51ba732fc2a51c5648e0b4aede69f922f1cd5cf54508e3c1f" Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.907468 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23ecc68db33ada51ba732fc2a51c5648e0b4aede69f922f1cd5cf54508e3c1f"} err="failed to get container status \"e23ecc68db33ada51ba732fc2a51c5648e0b4aede69f922f1cd5cf54508e3c1f\": rpc error: code = NotFound desc = could not find container \"e23ecc68db33ada51ba732fc2a51c5648e0b4aede69f922f1cd5cf54508e3c1f\": container with ID starting with e23ecc68db33ada51ba732fc2a51c5648e0b4aede69f922f1cd5cf54508e3c1f not found: ID does not exist" Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.907552 4717 scope.go:117] "RemoveContainer" containerID="41961816ffab8d7917aee9c43171a03786d34a975ebaf8eae9913112bef56f6b" Feb 18 12:24:48 crc kubenswrapper[4717]: E0218 12:24:48.907995 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41961816ffab8d7917aee9c43171a03786d34a975ebaf8eae9913112bef56f6b\": container with ID starting with 41961816ffab8d7917aee9c43171a03786d34a975ebaf8eae9913112bef56f6b not found: ID does not exist" containerID="41961816ffab8d7917aee9c43171a03786d34a975ebaf8eae9913112bef56f6b" Feb 18 12:24:48 crc kubenswrapper[4717]: I0218 12:24:48.908118 4717 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41961816ffab8d7917aee9c43171a03786d34a975ebaf8eae9913112bef56f6b"} err="failed to get container status \"41961816ffab8d7917aee9c43171a03786d34a975ebaf8eae9913112bef56f6b\": rpc error: code = NotFound desc = could not find container \"41961816ffab8d7917aee9c43171a03786d34a975ebaf8eae9913112bef56f6b\": container with ID starting with 41961816ffab8d7917aee9c43171a03786d34a975ebaf8eae9913112bef56f6b not found: ID does not exist" Feb 18 12:24:49 crc kubenswrapper[4717]: I0218 12:24:49.047803 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fb24fda-417d-4d7d-8848-5b90db35334d" path="/var/lib/kubelet/pods/7fb24fda-417d-4d7d-8848-5b90db35334d/volumes" Feb 18 12:24:49 crc kubenswrapper[4717]: I0218 12:24:49.610644 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:49 crc kubenswrapper[4717]: I0218 12:24:49.610716 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:49 crc kubenswrapper[4717]: I0218 12:24:49.665023 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:49 crc kubenswrapper[4717]: I0218 12:24:49.856641 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:51 crc kubenswrapper[4717]: I0218 12:24:51.674772 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b58kx"] Feb 18 12:24:51 crc kubenswrapper[4717]: I0218 12:24:51.831554 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b58kx" podUID="3b543c09-54a0-4bc6-b21c-b1aa1806502f" containerName="registry-server" 
containerID="cri-o://a51ec4e957e549346879d82c873b4c738cf9d8a3b63f204185e009888ddb6c02" gracePeriod=2 Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.356672 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.455467 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rscjd\" (UniqueName: \"kubernetes.io/projected/3b543c09-54a0-4bc6-b21c-b1aa1806502f-kube-api-access-rscjd\") pod \"3b543c09-54a0-4bc6-b21c-b1aa1806502f\" (UID: \"3b543c09-54a0-4bc6-b21c-b1aa1806502f\") " Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.455894 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b543c09-54a0-4bc6-b21c-b1aa1806502f-utilities\") pod \"3b543c09-54a0-4bc6-b21c-b1aa1806502f\" (UID: \"3b543c09-54a0-4bc6-b21c-b1aa1806502f\") " Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.456061 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b543c09-54a0-4bc6-b21c-b1aa1806502f-catalog-content\") pod \"3b543c09-54a0-4bc6-b21c-b1aa1806502f\" (UID: \"3b543c09-54a0-4bc6-b21c-b1aa1806502f\") " Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.456413 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b543c09-54a0-4bc6-b21c-b1aa1806502f-utilities" (OuterVolumeSpecName: "utilities") pod "3b543c09-54a0-4bc6-b21c-b1aa1806502f" (UID: "3b543c09-54a0-4bc6-b21c-b1aa1806502f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.459276 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b543c09-54a0-4bc6-b21c-b1aa1806502f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.466489 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b543c09-54a0-4bc6-b21c-b1aa1806502f-kube-api-access-rscjd" (OuterVolumeSpecName: "kube-api-access-rscjd") pod "3b543c09-54a0-4bc6-b21c-b1aa1806502f" (UID: "3b543c09-54a0-4bc6-b21c-b1aa1806502f"). InnerVolumeSpecName "kube-api-access-rscjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.482726 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b543c09-54a0-4bc6-b21c-b1aa1806502f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b543c09-54a0-4bc6-b21c-b1aa1806502f" (UID: "3b543c09-54a0-4bc6-b21c-b1aa1806502f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.561493 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rscjd\" (UniqueName: \"kubernetes.io/projected/3b543c09-54a0-4bc6-b21c-b1aa1806502f-kube-api-access-rscjd\") on node \"crc\" DevicePath \"\"" Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.561556 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b543c09-54a0-4bc6-b21c-b1aa1806502f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.843285 4717 generic.go:334] "Generic (PLEG): container finished" podID="3b543c09-54a0-4bc6-b21c-b1aa1806502f" containerID="a51ec4e957e549346879d82c873b4c738cf9d8a3b63f204185e009888ddb6c02" exitCode=0 Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.843295 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b58kx" event={"ID":"3b543c09-54a0-4bc6-b21c-b1aa1806502f","Type":"ContainerDied","Data":"a51ec4e957e549346879d82c873b4c738cf9d8a3b63f204185e009888ddb6c02"} Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.843345 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b58kx" Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.843360 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b58kx" event={"ID":"3b543c09-54a0-4bc6-b21c-b1aa1806502f","Type":"ContainerDied","Data":"35f0bf6ae11ef7122d195474e1a6122b53985bb2e702325dace988a642039ffa"} Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.843384 4717 scope.go:117] "RemoveContainer" containerID="a51ec4e957e549346879d82c873b4c738cf9d8a3b63f204185e009888ddb6c02" Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.864582 4717 scope.go:117] "RemoveContainer" containerID="6a3ce1220ad95541eaf1aa0ad02b27c774ab089deba82b17a767978c67caea8d" Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.877855 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b58kx"] Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.886480 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b58kx"] Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.900121 4717 scope.go:117] "RemoveContainer" containerID="70b68221912df3cfe804fe7c0d06fd8bf3c986968350980b427f4f017f1f18c4" Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.942640 4717 scope.go:117] "RemoveContainer" containerID="a51ec4e957e549346879d82c873b4c738cf9d8a3b63f204185e009888ddb6c02" Feb 18 12:24:52 crc kubenswrapper[4717]: E0218 12:24:52.943482 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a51ec4e957e549346879d82c873b4c738cf9d8a3b63f204185e009888ddb6c02\": container with ID starting with a51ec4e957e549346879d82c873b4c738cf9d8a3b63f204185e009888ddb6c02 not found: ID does not exist" containerID="a51ec4e957e549346879d82c873b4c738cf9d8a3b63f204185e009888ddb6c02" Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.943519 4717 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a51ec4e957e549346879d82c873b4c738cf9d8a3b63f204185e009888ddb6c02"} err="failed to get container status \"a51ec4e957e549346879d82c873b4c738cf9d8a3b63f204185e009888ddb6c02\": rpc error: code = NotFound desc = could not find container \"a51ec4e957e549346879d82c873b4c738cf9d8a3b63f204185e009888ddb6c02\": container with ID starting with a51ec4e957e549346879d82c873b4c738cf9d8a3b63f204185e009888ddb6c02 not found: ID does not exist" Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.943546 4717 scope.go:117] "RemoveContainer" containerID="6a3ce1220ad95541eaf1aa0ad02b27c774ab089deba82b17a767978c67caea8d" Feb 18 12:24:52 crc kubenswrapper[4717]: E0218 12:24:52.944407 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a3ce1220ad95541eaf1aa0ad02b27c774ab089deba82b17a767978c67caea8d\": container with ID starting with 6a3ce1220ad95541eaf1aa0ad02b27c774ab089deba82b17a767978c67caea8d not found: ID does not exist" containerID="6a3ce1220ad95541eaf1aa0ad02b27c774ab089deba82b17a767978c67caea8d" Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.944434 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a3ce1220ad95541eaf1aa0ad02b27c774ab089deba82b17a767978c67caea8d"} err="failed to get container status \"6a3ce1220ad95541eaf1aa0ad02b27c774ab089deba82b17a767978c67caea8d\": rpc error: code = NotFound desc = could not find container \"6a3ce1220ad95541eaf1aa0ad02b27c774ab089deba82b17a767978c67caea8d\": container with ID starting with 6a3ce1220ad95541eaf1aa0ad02b27c774ab089deba82b17a767978c67caea8d not found: ID does not exist" Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.944457 4717 scope.go:117] "RemoveContainer" containerID="70b68221912df3cfe804fe7c0d06fd8bf3c986968350980b427f4f017f1f18c4" Feb 18 12:24:52 crc kubenswrapper[4717]: E0218 
12:24:52.945065 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70b68221912df3cfe804fe7c0d06fd8bf3c986968350980b427f4f017f1f18c4\": container with ID starting with 70b68221912df3cfe804fe7c0d06fd8bf3c986968350980b427f4f017f1f18c4 not found: ID does not exist" containerID="70b68221912df3cfe804fe7c0d06fd8bf3c986968350980b427f4f017f1f18c4" Feb 18 12:24:52 crc kubenswrapper[4717]: I0218 12:24:52.945087 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b68221912df3cfe804fe7c0d06fd8bf3c986968350980b427f4f017f1f18c4"} err="failed to get container status \"70b68221912df3cfe804fe7c0d06fd8bf3c986968350980b427f4f017f1f18c4\": rpc error: code = NotFound desc = could not find container \"70b68221912df3cfe804fe7c0d06fd8bf3c986968350980b427f4f017f1f18c4\": container with ID starting with 70b68221912df3cfe804fe7c0d06fd8bf3c986968350980b427f4f017f1f18c4 not found: ID does not exist" Feb 18 12:24:53 crc kubenswrapper[4717]: I0218 12:24:53.047601 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b543c09-54a0-4bc6-b21c-b1aa1806502f" path="/var/lib/kubelet/pods/3b543c09-54a0-4bc6-b21c-b1aa1806502f/volumes" Feb 18 12:25:12 crc kubenswrapper[4717]: I0218 12:25:12.773226 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:25:12 crc kubenswrapper[4717]: I0218 12:25:12.773881 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 18 12:25:42 crc kubenswrapper[4717]: I0218 12:25:42.772751 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:25:42 crc kubenswrapper[4717]: I0218 12:25:42.773290 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.456669 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lg2qt"] Feb 18 12:25:44 crc kubenswrapper[4717]: E0218 12:25:44.459143 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb24fda-417d-4d7d-8848-5b90db35334d" containerName="extract-utilities" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.459244 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb24fda-417d-4d7d-8848-5b90db35334d" containerName="extract-utilities" Feb 18 12:25:44 crc kubenswrapper[4717]: E0218 12:25:44.459352 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb24fda-417d-4d7d-8848-5b90db35334d" containerName="extract-content" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.459414 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb24fda-417d-4d7d-8848-5b90db35334d" containerName="extract-content" Feb 18 12:25:44 crc kubenswrapper[4717]: E0218 12:25:44.459475 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b543c09-54a0-4bc6-b21c-b1aa1806502f" containerName="extract-content" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.459530 4717 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3b543c09-54a0-4bc6-b21c-b1aa1806502f" containerName="extract-content" Feb 18 12:25:44 crc kubenswrapper[4717]: E0218 12:25:44.459603 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb24fda-417d-4d7d-8848-5b90db35334d" containerName="registry-server" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.459665 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb24fda-417d-4d7d-8848-5b90db35334d" containerName="registry-server" Feb 18 12:25:44 crc kubenswrapper[4717]: E0218 12:25:44.459730 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b543c09-54a0-4bc6-b21c-b1aa1806502f" containerName="registry-server" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.459788 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b543c09-54a0-4bc6-b21c-b1aa1806502f" containerName="registry-server" Feb 18 12:25:44 crc kubenswrapper[4717]: E0218 12:25:44.459860 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b543c09-54a0-4bc6-b21c-b1aa1806502f" containerName="extract-utilities" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.459915 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b543c09-54a0-4bc6-b21c-b1aa1806502f" containerName="extract-utilities" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.460187 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fb24fda-417d-4d7d-8848-5b90db35334d" containerName="registry-server" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.460287 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b543c09-54a0-4bc6-b21c-b1aa1806502f" containerName="registry-server" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.461861 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.467272 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lg2qt"] Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.557094 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wfsl\" (UniqueName: \"kubernetes.io/projected/10551835-544f-4833-b0f9-6574ccc3ba3a-kube-api-access-7wfsl\") pod \"redhat-operators-lg2qt\" (UID: \"10551835-544f-4833-b0f9-6574ccc3ba3a\") " pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.557190 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10551835-544f-4833-b0f9-6574ccc3ba3a-catalog-content\") pod \"redhat-operators-lg2qt\" (UID: \"10551835-544f-4833-b0f9-6574ccc3ba3a\") " pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.557421 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10551835-544f-4833-b0f9-6574ccc3ba3a-utilities\") pod \"redhat-operators-lg2qt\" (UID: \"10551835-544f-4833-b0f9-6574ccc3ba3a\") " pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.659971 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10551835-544f-4833-b0f9-6574ccc3ba3a-utilities\") pod \"redhat-operators-lg2qt\" (UID: \"10551835-544f-4833-b0f9-6574ccc3ba3a\") " pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.660082 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10551835-544f-4833-b0f9-6574ccc3ba3a-catalog-content\") pod \"redhat-operators-lg2qt\" (UID: \"10551835-544f-4833-b0f9-6574ccc3ba3a\") " pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.660103 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wfsl\" (UniqueName: \"kubernetes.io/projected/10551835-544f-4833-b0f9-6574ccc3ba3a-kube-api-access-7wfsl\") pod \"redhat-operators-lg2qt\" (UID: \"10551835-544f-4833-b0f9-6574ccc3ba3a\") " pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.660884 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10551835-544f-4833-b0f9-6574ccc3ba3a-utilities\") pod \"redhat-operators-lg2qt\" (UID: \"10551835-544f-4833-b0f9-6574ccc3ba3a\") " pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.660913 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10551835-544f-4833-b0f9-6574ccc3ba3a-catalog-content\") pod \"redhat-operators-lg2qt\" (UID: \"10551835-544f-4833-b0f9-6574ccc3ba3a\") " pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.689089 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wfsl\" (UniqueName: \"kubernetes.io/projected/10551835-544f-4833-b0f9-6574ccc3ba3a-kube-api-access-7wfsl\") pod \"redhat-operators-lg2qt\" (UID: \"10551835-544f-4833-b0f9-6574ccc3ba3a\") " pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:25:44 crc kubenswrapper[4717]: I0218 12:25:44.786109 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:25:45 crc kubenswrapper[4717]: I0218 12:25:45.434596 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lg2qt"] Feb 18 12:25:45 crc kubenswrapper[4717]: I0218 12:25:45.552502 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2qt" event={"ID":"10551835-544f-4833-b0f9-6574ccc3ba3a","Type":"ContainerStarted","Data":"01f12970ce346496cfd8e4ec8f1541072e6837ac8bff465b759c05b2a7937982"} Feb 18 12:25:46 crc kubenswrapper[4717]: I0218 12:25:46.564110 4717 generic.go:334] "Generic (PLEG): container finished" podID="10551835-544f-4833-b0f9-6574ccc3ba3a" containerID="5ed7adff2e9389c6e30bc371ed7d52d0a199f923b17416e06e2726aeb4b2f518" exitCode=0 Feb 18 12:25:46 crc kubenswrapper[4717]: I0218 12:25:46.564233 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2qt" event={"ID":"10551835-544f-4833-b0f9-6574ccc3ba3a","Type":"ContainerDied","Data":"5ed7adff2e9389c6e30bc371ed7d52d0a199f923b17416e06e2726aeb4b2f518"} Feb 18 12:25:48 crc kubenswrapper[4717]: I0218 12:25:48.583975 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2qt" event={"ID":"10551835-544f-4833-b0f9-6574ccc3ba3a","Type":"ContainerStarted","Data":"3f871e42c63b4f5b5acebbd392786b01d0b90f7e14fe885b80983c2a457faca8"} Feb 18 12:25:50 crc kubenswrapper[4717]: I0218 12:25:50.603155 4717 generic.go:334] "Generic (PLEG): container finished" podID="10551835-544f-4833-b0f9-6574ccc3ba3a" containerID="3f871e42c63b4f5b5acebbd392786b01d0b90f7e14fe885b80983c2a457faca8" exitCode=0 Feb 18 12:25:50 crc kubenswrapper[4717]: I0218 12:25:50.603218 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2qt" 
event={"ID":"10551835-544f-4833-b0f9-6574ccc3ba3a","Type":"ContainerDied","Data":"3f871e42c63b4f5b5acebbd392786b01d0b90f7e14fe885b80983c2a457faca8"} Feb 18 12:25:52 crc kubenswrapper[4717]: I0218 12:25:52.623665 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2qt" event={"ID":"10551835-544f-4833-b0f9-6574ccc3ba3a","Type":"ContainerStarted","Data":"4587218d51cb097a914e2a8b06084ada78bdad5e7947267a231e3a202bdfd8d9"} Feb 18 12:25:52 crc kubenswrapper[4717]: I0218 12:25:52.644110 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lg2qt" podStartSLOduration=3.789463677 podStartE2EDuration="8.644086371s" podCreationTimestamp="2026-02-18 12:25:44 +0000 UTC" firstStartedPulling="2026-02-18 12:25:46.566443722 +0000 UTC m=+2180.968545028" lastFinishedPulling="2026-02-18 12:25:51.421066406 +0000 UTC m=+2185.823167722" observedRunningTime="2026-02-18 12:25:52.641457075 +0000 UTC m=+2187.043558401" watchObservedRunningTime="2026-02-18 12:25:52.644086371 +0000 UTC m=+2187.046187687" Feb 18 12:25:54 crc kubenswrapper[4717]: I0218 12:25:54.786937 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:25:54 crc kubenswrapper[4717]: I0218 12:25:54.787349 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:25:55 crc kubenswrapper[4717]: I0218 12:25:55.833693 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lg2qt" podUID="10551835-544f-4833-b0f9-6574ccc3ba3a" containerName="registry-server" probeResult="failure" output=< Feb 18 12:25:55 crc kubenswrapper[4717]: timeout: failed to connect service ":50051" within 1s Feb 18 12:25:55 crc kubenswrapper[4717]: > Feb 18 12:26:05 crc kubenswrapper[4717]: I0218 12:26:05.925897 4717 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-lg2qt" podUID="10551835-544f-4833-b0f9-6574ccc3ba3a" containerName="registry-server" probeResult="failure" output=< Feb 18 12:26:05 crc kubenswrapper[4717]: timeout: failed to connect service ":50051" within 1s Feb 18 12:26:05 crc kubenswrapper[4717]: > Feb 18 12:26:12 crc kubenswrapper[4717]: I0218 12:26:12.772718 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:26:12 crc kubenswrapper[4717]: I0218 12:26:12.773470 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:26:12 crc kubenswrapper[4717]: I0218 12:26:12.773551 4717 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 12:26:12 crc kubenswrapper[4717]: I0218 12:26:12.774758 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2cadc8b7b94fe4f5a4e6a22f9c6f9680d0439afb6d0ced72d939b5231b18d983"} pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:26:12 crc kubenswrapper[4717]: I0218 12:26:12.774883 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" 
containerID="cri-o://2cadc8b7b94fe4f5a4e6a22f9c6f9680d0439afb6d0ced72d939b5231b18d983" gracePeriod=600 Feb 18 12:26:13 crc kubenswrapper[4717]: I0218 12:26:13.820949 4717 generic.go:334] "Generic (PLEG): container finished" podID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerID="2cadc8b7b94fe4f5a4e6a22f9c6f9680d0439afb6d0ced72d939b5231b18d983" exitCode=0 Feb 18 12:26:13 crc kubenswrapper[4717]: I0218 12:26:13.821593 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerDied","Data":"2cadc8b7b94fe4f5a4e6a22f9c6f9680d0439afb6d0ced72d939b5231b18d983"} Feb 18 12:26:13 crc kubenswrapper[4717]: I0218 12:26:13.821635 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931"} Feb 18 12:26:13 crc kubenswrapper[4717]: I0218 12:26:13.821655 4717 scope.go:117] "RemoveContainer" containerID="53922b9ff3749361f074e3d882f5752571c7e58c43c2bb3d196d68edaf84456c" Feb 18 12:26:14 crc kubenswrapper[4717]: I0218 12:26:14.834825 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:26:14 crc kubenswrapper[4717]: I0218 12:26:14.884730 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:26:15 crc kubenswrapper[4717]: I0218 12:26:15.662209 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lg2qt"] Feb 18 12:26:16 crc kubenswrapper[4717]: I0218 12:26:16.865263 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lg2qt" podUID="10551835-544f-4833-b0f9-6574ccc3ba3a" 
containerName="registry-server" containerID="cri-o://4587218d51cb097a914e2a8b06084ada78bdad5e7947267a231e3a202bdfd8d9" gracePeriod=2 Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.324205 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.451840 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10551835-544f-4833-b0f9-6574ccc3ba3a-catalog-content\") pod \"10551835-544f-4833-b0f9-6574ccc3ba3a\" (UID: \"10551835-544f-4833-b0f9-6574ccc3ba3a\") " Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.451935 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10551835-544f-4833-b0f9-6574ccc3ba3a-utilities\") pod \"10551835-544f-4833-b0f9-6574ccc3ba3a\" (UID: \"10551835-544f-4833-b0f9-6574ccc3ba3a\") " Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.451966 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wfsl\" (UniqueName: \"kubernetes.io/projected/10551835-544f-4833-b0f9-6574ccc3ba3a-kube-api-access-7wfsl\") pod \"10551835-544f-4833-b0f9-6574ccc3ba3a\" (UID: \"10551835-544f-4833-b0f9-6574ccc3ba3a\") " Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.453368 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10551835-544f-4833-b0f9-6574ccc3ba3a-utilities" (OuterVolumeSpecName: "utilities") pod "10551835-544f-4833-b0f9-6574ccc3ba3a" (UID: "10551835-544f-4833-b0f9-6574ccc3ba3a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.458517 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10551835-544f-4833-b0f9-6574ccc3ba3a-kube-api-access-7wfsl" (OuterVolumeSpecName: "kube-api-access-7wfsl") pod "10551835-544f-4833-b0f9-6574ccc3ba3a" (UID: "10551835-544f-4833-b0f9-6574ccc3ba3a"). InnerVolumeSpecName "kube-api-access-7wfsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.554660 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10551835-544f-4833-b0f9-6574ccc3ba3a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.554705 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wfsl\" (UniqueName: \"kubernetes.io/projected/10551835-544f-4833-b0f9-6574ccc3ba3a-kube-api-access-7wfsl\") on node \"crc\" DevicePath \"\"" Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.589816 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10551835-544f-4833-b0f9-6574ccc3ba3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10551835-544f-4833-b0f9-6574ccc3ba3a" (UID: "10551835-544f-4833-b0f9-6574ccc3ba3a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.656887 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10551835-544f-4833-b0f9-6574ccc3ba3a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.875759 4717 generic.go:334] "Generic (PLEG): container finished" podID="10551835-544f-4833-b0f9-6574ccc3ba3a" containerID="4587218d51cb097a914e2a8b06084ada78bdad5e7947267a231e3a202bdfd8d9" exitCode=0 Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.875813 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2qt" event={"ID":"10551835-544f-4833-b0f9-6574ccc3ba3a","Type":"ContainerDied","Data":"4587218d51cb097a914e2a8b06084ada78bdad5e7947267a231e3a202bdfd8d9"} Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.876116 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg2qt" event={"ID":"10551835-544f-4833-b0f9-6574ccc3ba3a","Type":"ContainerDied","Data":"01f12970ce346496cfd8e4ec8f1541072e6837ac8bff465b759c05b2a7937982"} Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.876143 4717 scope.go:117] "RemoveContainer" containerID="4587218d51cb097a914e2a8b06084ada78bdad5e7947267a231e3a202bdfd8d9" Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.875871 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lg2qt" Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.911092 4717 scope.go:117] "RemoveContainer" containerID="3f871e42c63b4f5b5acebbd392786b01d0b90f7e14fe885b80983c2a457faca8" Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.917077 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lg2qt"] Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.925709 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lg2qt"] Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.934303 4717 scope.go:117] "RemoveContainer" containerID="5ed7adff2e9389c6e30bc371ed7d52d0a199f923b17416e06e2726aeb4b2f518" Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.980750 4717 scope.go:117] "RemoveContainer" containerID="4587218d51cb097a914e2a8b06084ada78bdad5e7947267a231e3a202bdfd8d9" Feb 18 12:26:17 crc kubenswrapper[4717]: E0218 12:26:17.981336 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4587218d51cb097a914e2a8b06084ada78bdad5e7947267a231e3a202bdfd8d9\": container with ID starting with 4587218d51cb097a914e2a8b06084ada78bdad5e7947267a231e3a202bdfd8d9 not found: ID does not exist" containerID="4587218d51cb097a914e2a8b06084ada78bdad5e7947267a231e3a202bdfd8d9" Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.981380 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4587218d51cb097a914e2a8b06084ada78bdad5e7947267a231e3a202bdfd8d9"} err="failed to get container status \"4587218d51cb097a914e2a8b06084ada78bdad5e7947267a231e3a202bdfd8d9\": rpc error: code = NotFound desc = could not find container \"4587218d51cb097a914e2a8b06084ada78bdad5e7947267a231e3a202bdfd8d9\": container with ID starting with 4587218d51cb097a914e2a8b06084ada78bdad5e7947267a231e3a202bdfd8d9 not found: ID does 
not exist" Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.981410 4717 scope.go:117] "RemoveContainer" containerID="3f871e42c63b4f5b5acebbd392786b01d0b90f7e14fe885b80983c2a457faca8" Feb 18 12:26:17 crc kubenswrapper[4717]: E0218 12:26:17.981926 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f871e42c63b4f5b5acebbd392786b01d0b90f7e14fe885b80983c2a457faca8\": container with ID starting with 3f871e42c63b4f5b5acebbd392786b01d0b90f7e14fe885b80983c2a457faca8 not found: ID does not exist" containerID="3f871e42c63b4f5b5acebbd392786b01d0b90f7e14fe885b80983c2a457faca8" Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.981960 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f871e42c63b4f5b5acebbd392786b01d0b90f7e14fe885b80983c2a457faca8"} err="failed to get container status \"3f871e42c63b4f5b5acebbd392786b01d0b90f7e14fe885b80983c2a457faca8\": rpc error: code = NotFound desc = could not find container \"3f871e42c63b4f5b5acebbd392786b01d0b90f7e14fe885b80983c2a457faca8\": container with ID starting with 3f871e42c63b4f5b5acebbd392786b01d0b90f7e14fe885b80983c2a457faca8 not found: ID does not exist" Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.981982 4717 scope.go:117] "RemoveContainer" containerID="5ed7adff2e9389c6e30bc371ed7d52d0a199f923b17416e06e2726aeb4b2f518" Feb 18 12:26:17 crc kubenswrapper[4717]: E0218 12:26:17.982416 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ed7adff2e9389c6e30bc371ed7d52d0a199f923b17416e06e2726aeb4b2f518\": container with ID starting with 5ed7adff2e9389c6e30bc371ed7d52d0a199f923b17416e06e2726aeb4b2f518 not found: ID does not exist" containerID="5ed7adff2e9389c6e30bc371ed7d52d0a199f923b17416e06e2726aeb4b2f518" Feb 18 12:26:17 crc kubenswrapper[4717]: I0218 12:26:17.982461 4717 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ed7adff2e9389c6e30bc371ed7d52d0a199f923b17416e06e2726aeb4b2f518"} err="failed to get container status \"5ed7adff2e9389c6e30bc371ed7d52d0a199f923b17416e06e2726aeb4b2f518\": rpc error: code = NotFound desc = could not find container \"5ed7adff2e9389c6e30bc371ed7d52d0a199f923b17416e06e2726aeb4b2f518\": container with ID starting with 5ed7adff2e9389c6e30bc371ed7d52d0a199f923b17416e06e2726aeb4b2f518 not found: ID does not exist" Feb 18 12:26:19 crc kubenswrapper[4717]: I0218 12:26:19.046922 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10551835-544f-4833-b0f9-6574ccc3ba3a" path="/var/lib/kubelet/pods/10551835-544f-4833-b0f9-6574ccc3ba3a/volumes" Feb 18 12:27:20 crc kubenswrapper[4717]: I0218 12:27:20.640343 4717 generic.go:334] "Generic (PLEG): container finished" podID="6e45806a-5dfc-4368-b276-a59ba198f17e" containerID="05b6b8ae8e68e2853de57c5d1c415f4906d345c434093a150ae672ff96df1836" exitCode=0 Feb 18 12:27:20 crc kubenswrapper[4717]: I0218 12:27:20.640843 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" event={"ID":"6e45806a-5dfc-4368-b276-a59ba198f17e","Type":"ContainerDied","Data":"05b6b8ae8e68e2853de57c5d1c415f4906d345c434093a150ae672ff96df1836"} Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.117876 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.259959 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-libvirt-secret-0\") pod \"6e45806a-5dfc-4368-b276-a59ba198f17e\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.260074 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-ssh-key-openstack-edpm-ipam\") pod \"6e45806a-5dfc-4368-b276-a59ba198f17e\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.260135 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkw76\" (UniqueName: \"kubernetes.io/projected/6e45806a-5dfc-4368-b276-a59ba198f17e-kube-api-access-tkw76\") pod \"6e45806a-5dfc-4368-b276-a59ba198f17e\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.260155 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-inventory\") pod \"6e45806a-5dfc-4368-b276-a59ba198f17e\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.260187 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-libvirt-combined-ca-bundle\") pod \"6e45806a-5dfc-4368-b276-a59ba198f17e\" (UID: \"6e45806a-5dfc-4368-b276-a59ba198f17e\") " Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.266387 4717 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "6e45806a-5dfc-4368-b276-a59ba198f17e" (UID: "6e45806a-5dfc-4368-b276-a59ba198f17e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.267158 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e45806a-5dfc-4368-b276-a59ba198f17e-kube-api-access-tkw76" (OuterVolumeSpecName: "kube-api-access-tkw76") pod "6e45806a-5dfc-4368-b276-a59ba198f17e" (UID: "6e45806a-5dfc-4368-b276-a59ba198f17e"). InnerVolumeSpecName "kube-api-access-tkw76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.289657 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-inventory" (OuterVolumeSpecName: "inventory") pod "6e45806a-5dfc-4368-b276-a59ba198f17e" (UID: "6e45806a-5dfc-4368-b276-a59ba198f17e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.289724 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "6e45806a-5dfc-4368-b276-a59ba198f17e" (UID: "6e45806a-5dfc-4368-b276-a59ba198f17e"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.291751 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6e45806a-5dfc-4368-b276-a59ba198f17e" (UID: "6e45806a-5dfc-4368-b276-a59ba198f17e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.362776 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkw76\" (UniqueName: \"kubernetes.io/projected/6e45806a-5dfc-4368-b276-a59ba198f17e-kube-api-access-tkw76\") on node \"crc\" DevicePath \"\"" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.362823 4717 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.362835 4717 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.362852 4717 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.362899 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e45806a-5dfc-4368-b276-a59ba198f17e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.661250 4717 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" event={"ID":"6e45806a-5dfc-4368-b276-a59ba198f17e","Type":"ContainerDied","Data":"1b0e8e413d4555751937c239d622fcc3eb2d707c4aa5c12786c9855478bcaea5"} Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.661333 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b0e8e413d4555751937c239d622fcc3eb2d707c4aa5c12786c9855478bcaea5" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.661438 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.757188 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc"] Feb 18 12:27:22 crc kubenswrapper[4717]: E0218 12:27:22.757652 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10551835-544f-4833-b0f9-6574ccc3ba3a" containerName="registry-server" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.757666 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="10551835-544f-4833-b0f9-6574ccc3ba3a" containerName="registry-server" Feb 18 12:27:22 crc kubenswrapper[4717]: E0218 12:27:22.757695 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e45806a-5dfc-4368-b276-a59ba198f17e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.757710 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e45806a-5dfc-4368-b276-a59ba198f17e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 12:27:22 crc kubenswrapper[4717]: E0218 12:27:22.757729 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10551835-544f-4833-b0f9-6574ccc3ba3a" containerName="extract-content" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.757734 4717 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="10551835-544f-4833-b0f9-6574ccc3ba3a" containerName="extract-content" Feb 18 12:27:22 crc kubenswrapper[4717]: E0218 12:27:22.757752 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10551835-544f-4833-b0f9-6574ccc3ba3a" containerName="extract-utilities" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.757758 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="10551835-544f-4833-b0f9-6574ccc3ba3a" containerName="extract-utilities" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.757963 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e45806a-5dfc-4368-b276-a59ba198f17e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.757983 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="10551835-544f-4833-b0f9-6574ccc3ba3a" containerName="registry-server" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.758698 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.761226 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.761733 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.761848 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.761995 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.762075 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.763885 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.764094 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.777908 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc"] Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.874530 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 
12:27:22.874637 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt8sr\" (UniqueName: \"kubernetes.io/projected/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-kube-api-access-rt8sr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.874677 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.874705 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.874911 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.874949 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.874991 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.875104 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.875180 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.875202 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" 
(UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.875224 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.976622 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.976693 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.976722 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.976746 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.976764 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.976795 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.976857 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt8sr\" (UniqueName: \"kubernetes.io/projected/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-kube-api-access-rt8sr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.976899 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-inventory\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.976933 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.976999 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.977030 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.978022 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.982136 4717 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.982396 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.983355 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.983877 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.983916 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: 
\"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.984276 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.984351 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.984644 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.986136 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:22 crc kubenswrapper[4717]: I0218 12:27:22.996378 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rt8sr\" (UniqueName: \"kubernetes.io/projected/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-kube-api-access-rt8sr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cgnxc\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:23 crc kubenswrapper[4717]: I0218 12:27:23.082429 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:27:23 crc kubenswrapper[4717]: I0218 12:27:23.618730 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc"] Feb 18 12:27:23 crc kubenswrapper[4717]: I0218 12:27:23.624212 4717 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:27:23 crc kubenswrapper[4717]: I0218 12:27:23.670896 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" event={"ID":"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3","Type":"ContainerStarted","Data":"22a8f8bfdb369b7fe4accd19cbd8eb61a24ef6d8c6089602383eeb85149152d6"} Feb 18 12:27:24 crc kubenswrapper[4717]: I0218 12:27:24.680295 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" event={"ID":"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3","Type":"ContainerStarted","Data":"3d5f347a5159a66b8d57248b0956bc7e1438d6b5bad7bb6ff14da16331bab552"} Feb 18 12:27:24 crc kubenswrapper[4717]: I0218 12:27:24.708422 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" podStartSLOduration=2.105259123 podStartE2EDuration="2.70839954s" podCreationTimestamp="2026-02-18 12:27:22 +0000 UTC" firstStartedPulling="2026-02-18 12:27:23.624005322 +0000 UTC m=+2278.026106638" lastFinishedPulling="2026-02-18 12:27:24.227145739 +0000 UTC m=+2278.629247055" 
observedRunningTime="2026-02-18 12:27:24.703907071 +0000 UTC m=+2279.106008387" watchObservedRunningTime="2026-02-18 12:27:24.70839954 +0000 UTC m=+2279.110500866" Feb 18 12:28:42 crc kubenswrapper[4717]: I0218 12:28:42.772875 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:28:42 crc kubenswrapper[4717]: I0218 12:28:42.773305 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:29:12 crc kubenswrapper[4717]: I0218 12:29:12.772721 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:29:12 crc kubenswrapper[4717]: I0218 12:29:12.773345 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:29:29 crc kubenswrapper[4717]: I0218 12:29:29.905170 4717 generic.go:334] "Generic (PLEG): container finished" podID="2facfe0c-cdcb-44fb-ab60-197e5cf58fb3" containerID="3d5f347a5159a66b8d57248b0956bc7e1438d6b5bad7bb6ff14da16331bab552" exitCode=0 Feb 18 12:29:29 crc kubenswrapper[4717]: I0218 12:29:29.905219 4717 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" event={"ID":"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3","Type":"ContainerDied","Data":"3d5f347a5159a66b8d57248b0956bc7e1438d6b5bad7bb6ff14da16331bab552"} Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.293704 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.441992 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-1\") pod \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.442066 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-extra-config-0\") pod \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.442167 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-combined-ca-bundle\") pod \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.442210 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-inventory\") pod \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.442393 4717 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-ssh-key-openstack-edpm-ipam\") pod \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.442449 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-migration-ssh-key-1\") pod \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.442576 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt8sr\" (UniqueName: \"kubernetes.io/projected/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-kube-api-access-rt8sr\") pod \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.442606 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-2\") pod \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.442660 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-0\") pod \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.442752 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-migration-ssh-key-0\") pod \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.442786 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-3\") pod \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\" (UID: \"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3\") " Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.449072 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-kube-api-access-rt8sr" (OuterVolumeSpecName: "kube-api-access-rt8sr") pod "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3" (UID: "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3"). InnerVolumeSpecName "kube-api-access-rt8sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.455399 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3" (UID: "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.472286 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3" (UID: "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.484763 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3" (UID: "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.485767 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3" (UID: "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.489634 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3" (UID: "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.493078 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-inventory" (OuterVolumeSpecName: "inventory") pod "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3" (UID: "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.493719 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3" (UID: "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.493619 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3" (UID: "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.497468 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3" (UID: "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.516532 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3" (UID: "2facfe0c-cdcb-44fb-ab60-197e5cf58fb3"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.546118 4717 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.546177 4717 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.546193 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.546207 4717 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.546221 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt8sr\" (UniqueName: \"kubernetes.io/projected/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-kube-api-access-rt8sr\") on node \"crc\" DevicePath \"\"" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.546233 4717 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.546245 4717 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-0\") 
on node \"crc\" DevicePath \"\"" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.546281 4717 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.546300 4717 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.546314 4717 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.546327 4717 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2facfe0c-cdcb-44fb-ab60-197e5cf58fb3-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.921811 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" event={"ID":"2facfe0c-cdcb-44fb-ab60-197e5cf58fb3","Type":"ContainerDied","Data":"22a8f8bfdb369b7fe4accd19cbd8eb61a24ef6d8c6089602383eeb85149152d6"} Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.921862 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22a8f8bfdb369b7fe4accd19cbd8eb61a24ef6d8c6089602383eeb85149152d6" Feb 18 12:29:31 crc kubenswrapper[4717]: I0218 12:29:31.921901 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cgnxc" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.026544 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv"] Feb 18 12:29:32 crc kubenswrapper[4717]: E0218 12:29:32.027015 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2facfe0c-cdcb-44fb-ab60-197e5cf58fb3" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.027037 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="2facfe0c-cdcb-44fb-ab60-197e5cf58fb3" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.027380 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="2facfe0c-cdcb-44fb-ab60-197e5cf58fb3" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.028201 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.032405 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.032662 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xk6fx" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.032773 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.033382 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.035160 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.036991 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv"] Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.160104 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.160657 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.161082 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.161475 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.161846 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6szs7\" (UniqueName: \"kubernetes.io/projected/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-kube-api-access-6szs7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.162105 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.162320 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.264627 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.264788 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.264861 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6szs7\" (UniqueName: \"kubernetes.io/projected/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-kube-api-access-6szs7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.264925 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.264969 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.265079 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.265747 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.269788 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.270192 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.286915 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.287112 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.287162 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.288825 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.290048 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6szs7\" (UniqueName: \"kubernetes.io/projected/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-kube-api-access-6szs7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.344174 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.866237 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv"] Feb 18 12:29:32 crc kubenswrapper[4717]: I0218 12:29:32.933066 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" event={"ID":"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963","Type":"ContainerStarted","Data":"b6bc26d2f7fd4e0ffc975532c131015a8f2d4082d082b415900addc54204ee26"} Feb 18 12:29:33 crc kubenswrapper[4717]: I0218 12:29:33.949575 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" event={"ID":"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963","Type":"ContainerStarted","Data":"9b2d8c559d34249d391de6b0a346338c76b83f6c6c8ccb8258241622485044cc"} Feb 18 12:29:33 crc kubenswrapper[4717]: I0218 12:29:33.977177 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" podStartSLOduration=1.444256185 podStartE2EDuration="1.977150436s" podCreationTimestamp="2026-02-18 12:29:32 +0000 UTC" firstStartedPulling="2026-02-18 12:29:32.863156634 +0000 UTC m=+2407.265257950" lastFinishedPulling="2026-02-18 12:29:33.396050895 +0000 UTC m=+2407.798152201" observedRunningTime="2026-02-18 12:29:33.970090643 +0000 UTC m=+2408.372191959" watchObservedRunningTime="2026-02-18 12:29:33.977150436 +0000 UTC m=+2408.379251752" Feb 18 12:29:42 crc kubenswrapper[4717]: I0218 12:29:42.772847 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:29:42 crc kubenswrapper[4717]: 
I0218 12:29:42.773544 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:29:42 crc kubenswrapper[4717]: I0218 12:29:42.773598 4717 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 12:29:42 crc kubenswrapper[4717]: I0218 12:29:42.774568 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931"} pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:29:42 crc kubenswrapper[4717]: I0218 12:29:42.774629 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" containerID="cri-o://1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" gracePeriod=600 Feb 18 12:29:42 crc kubenswrapper[4717]: E0218 12:29:42.898384 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:29:43 crc kubenswrapper[4717]: I0218 12:29:43.068578 4717 generic.go:334] "Generic (PLEG): container finished" 
podID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" exitCode=0 Feb 18 12:29:43 crc kubenswrapper[4717]: I0218 12:29:43.068640 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerDied","Data":"1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931"} Feb 18 12:29:43 crc kubenswrapper[4717]: I0218 12:29:43.068687 4717 scope.go:117] "RemoveContainer" containerID="2cadc8b7b94fe4f5a4e6a22f9c6f9680d0439afb6d0ced72d939b5231b18d983" Feb 18 12:29:43 crc kubenswrapper[4717]: I0218 12:29:43.069623 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:29:43 crc kubenswrapper[4717]: E0218 12:29:43.069979 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:29:57 crc kubenswrapper[4717]: I0218 12:29:57.043738 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:29:57 crc kubenswrapper[4717]: E0218 12:29:57.044410 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 
12:30:00 crc kubenswrapper[4717]: I0218 12:30:00.158390 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf"] Feb 18 12:30:00 crc kubenswrapper[4717]: I0218 12:30:00.164202 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" Feb 18 12:30:00 crc kubenswrapper[4717]: I0218 12:30:00.167633 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 12:30:00 crc kubenswrapper[4717]: I0218 12:30:00.167955 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 12:30:00 crc kubenswrapper[4717]: I0218 12:30:00.180244 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf"] Feb 18 12:30:00 crc kubenswrapper[4717]: I0218 12:30:00.268874 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-config-volume\") pod \"collect-profiles-29523630-kfqlf\" (UID: \"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" Feb 18 12:30:00 crc kubenswrapper[4717]: I0218 12:30:00.268954 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdfgz\" (UniqueName: \"kubernetes.io/projected/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-kube-api-access-xdfgz\") pod \"collect-profiles-29523630-kfqlf\" (UID: \"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" Feb 18 12:30:00 crc kubenswrapper[4717]: I0218 12:30:00.269758 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-secret-volume\") pod \"collect-profiles-29523630-kfqlf\" (UID: \"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" Feb 18 12:30:00 crc kubenswrapper[4717]: I0218 12:30:00.372547 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-secret-volume\") pod \"collect-profiles-29523630-kfqlf\" (UID: \"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" Feb 18 12:30:00 crc kubenswrapper[4717]: I0218 12:30:00.372648 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-config-volume\") pod \"collect-profiles-29523630-kfqlf\" (UID: \"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" Feb 18 12:30:00 crc kubenswrapper[4717]: I0218 12:30:00.372678 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdfgz\" (UniqueName: \"kubernetes.io/projected/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-kube-api-access-xdfgz\") pod \"collect-profiles-29523630-kfqlf\" (UID: \"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" Feb 18 12:30:00 crc kubenswrapper[4717]: I0218 12:30:00.373735 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-config-volume\") pod \"collect-profiles-29523630-kfqlf\" (UID: \"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" Feb 18 12:30:00 crc kubenswrapper[4717]: I0218 12:30:00.379699 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-secret-volume\") pod \"collect-profiles-29523630-kfqlf\" (UID: \"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" Feb 18 12:30:00 crc kubenswrapper[4717]: I0218 12:30:00.394839 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdfgz\" (UniqueName: \"kubernetes.io/projected/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-kube-api-access-xdfgz\") pod \"collect-profiles-29523630-kfqlf\" (UID: \"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" Feb 18 12:30:00 crc kubenswrapper[4717]: I0218 12:30:00.500926 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" Feb 18 12:30:00 crc kubenswrapper[4717]: I0218 12:30:00.965188 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf"] Feb 18 12:30:01 crc kubenswrapper[4717]: I0218 12:30:01.234917 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" event={"ID":"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5","Type":"ContainerStarted","Data":"7476f1b732269ad0389bd2c98bf5c93cc5e7fa5026442b7a56c0d3012f75937f"} Feb 18 12:30:01 crc kubenswrapper[4717]: I0218 12:30:01.235338 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" event={"ID":"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5","Type":"ContainerStarted","Data":"1ad1494fec69c305e89850a26d0e626ad67d5bb51206137fd22688b6b47265f7"} Feb 18 12:30:01 crc kubenswrapper[4717]: I0218 12:30:01.264020 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" podStartSLOduration=1.2639990669999999 podStartE2EDuration="1.263999067s" podCreationTimestamp="2026-02-18 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:30:01.262843493 +0000 UTC m=+2435.664944809" watchObservedRunningTime="2026-02-18 12:30:01.263999067 +0000 UTC m=+2435.666100383" Feb 18 12:30:02 crc kubenswrapper[4717]: I0218 12:30:02.246755 4717 generic.go:334] "Generic (PLEG): container finished" podID="b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5" containerID="7476f1b732269ad0389bd2c98bf5c93cc5e7fa5026442b7a56c0d3012f75937f" exitCode=0 Feb 18 12:30:02 crc kubenswrapper[4717]: I0218 12:30:02.246873 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" event={"ID":"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5","Type":"ContainerDied","Data":"7476f1b732269ad0389bd2c98bf5c93cc5e7fa5026442b7a56c0d3012f75937f"} Feb 18 12:30:03 crc kubenswrapper[4717]: I0218 12:30:03.554210 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" Feb 18 12:30:03 crc kubenswrapper[4717]: I0218 12:30:03.643152 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdfgz\" (UniqueName: \"kubernetes.io/projected/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-kube-api-access-xdfgz\") pod \"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5\" (UID: \"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5\") " Feb 18 12:30:03 crc kubenswrapper[4717]: I0218 12:30:03.643244 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-secret-volume\") pod \"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5\" (UID: \"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5\") " Feb 18 12:30:03 crc kubenswrapper[4717]: I0218 12:30:03.643315 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-config-volume\") pod \"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5\" (UID: \"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5\") " Feb 18 12:30:03 crc kubenswrapper[4717]: I0218 12:30:03.644006 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-config-volume" (OuterVolumeSpecName: "config-volume") pod "b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5" (UID: "b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:30:03 crc kubenswrapper[4717]: I0218 12:30:03.649511 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5" (UID: "b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:30:03 crc kubenswrapper[4717]: I0218 12:30:03.660479 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-kube-api-access-xdfgz" (OuterVolumeSpecName: "kube-api-access-xdfgz") pod "b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5" (UID: "b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5"). InnerVolumeSpecName "kube-api-access-xdfgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:30:03 crc kubenswrapper[4717]: I0218 12:30:03.745637 4717 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 12:30:03 crc kubenswrapper[4717]: I0218 12:30:03.745681 4717 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 12:30:03 crc kubenswrapper[4717]: I0218 12:30:03.745692 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdfgz\" (UniqueName: \"kubernetes.io/projected/b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5-kube-api-access-xdfgz\") on node \"crc\" DevicePath \"\"" Feb 18 12:30:04 crc kubenswrapper[4717]: I0218 12:30:04.267605 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" 
event={"ID":"b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5","Type":"ContainerDied","Data":"1ad1494fec69c305e89850a26d0e626ad67d5bb51206137fd22688b6b47265f7"} Feb 18 12:30:04 crc kubenswrapper[4717]: I0218 12:30:04.267651 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ad1494fec69c305e89850a26d0e626ad67d5bb51206137fd22688b6b47265f7" Feb 18 12:30:04 crc kubenswrapper[4717]: I0218 12:30:04.267654 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523630-kfqlf" Feb 18 12:30:04 crc kubenswrapper[4717]: I0218 12:30:04.628909 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2"] Feb 18 12:30:04 crc kubenswrapper[4717]: I0218 12:30:04.639001 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523585-62qk2"] Feb 18 12:30:05 crc kubenswrapper[4717]: I0218 12:30:05.049047 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c2e253c-448f-448b-8419-b898112f632c" path="/var/lib/kubelet/pods/0c2e253c-448f-448b-8419-b898112f632c/volumes" Feb 18 12:30:09 crc kubenswrapper[4717]: I0218 12:30:09.036863 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:30:09 crc kubenswrapper[4717]: E0218 12:30:09.037684 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:30:20 crc kubenswrapper[4717]: I0218 12:30:20.037161 4717 scope.go:117] "RemoveContainer" 
containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:30:20 crc kubenswrapper[4717]: E0218 12:30:20.037942 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:30:34 crc kubenswrapper[4717]: I0218 12:30:34.036951 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:30:34 crc kubenswrapper[4717]: E0218 12:30:34.037721 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:30:41 crc kubenswrapper[4717]: I0218 12:30:41.028982 4717 scope.go:117] "RemoveContainer" containerID="f152317a5af14b200a958b107bb5019473a6a9c8b153a51f0a9ff35d59d35a92" Feb 18 12:30:47 crc kubenswrapper[4717]: I0218 12:30:47.042317 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:30:47 crc kubenswrapper[4717]: E0218 12:30:47.042995 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:30:58 crc kubenswrapper[4717]: I0218 12:30:58.036622 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:30:58 crc kubenswrapper[4717]: E0218 12:30:58.037434 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:31:13 crc kubenswrapper[4717]: I0218 12:31:13.036742 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:31:13 crc kubenswrapper[4717]: E0218 12:31:13.037607 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:31:25 crc kubenswrapper[4717]: I0218 12:31:25.037075 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:31:25 crc kubenswrapper[4717]: E0218 12:31:25.038067 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:31:34 crc kubenswrapper[4717]: I0218 12:31:34.149480 4717 generic.go:334] "Generic (PLEG): container finished" podID="95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963" containerID="9b2d8c559d34249d391de6b0a346338c76b83f6c6c8ccb8258241622485044cc" exitCode=0 Feb 18 12:31:34 crc kubenswrapper[4717]: I0218 12:31:34.149576 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" event={"ID":"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963","Type":"ContainerDied","Data":"9b2d8c559d34249d391de6b0a346338c76b83f6c6c8ccb8258241622485044cc"} Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.618090 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.739521 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6szs7\" (UniqueName: \"kubernetes.io/projected/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-kube-api-access-6szs7\") pod \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.739560 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-0\") pod \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.739676 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ssh-key-openstack-edpm-ipam\") pod \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.739791 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-2\") pod \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.739909 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-1\") pod \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.739933 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-telemetry-combined-ca-bundle\") pod \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.739961 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-inventory\") pod \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\" (UID: \"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963\") " Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.746478 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-kube-api-access-6szs7" (OuterVolumeSpecName: "kube-api-access-6szs7") pod "95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963" 
(UID: "95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963"). InnerVolumeSpecName "kube-api-access-6szs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.746468 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963" (UID: "95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.771163 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963" (UID: "95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.771325 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-inventory" (OuterVolumeSpecName: "inventory") pod "95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963" (UID: "95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.773433 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963" (UID: "95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.773921 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963" (UID: "95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.774933 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963" (UID: "95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.842906 4717 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.842941 4717 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.842963 4717 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.842986 4717 
reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.842999 4717 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.843012 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6szs7\" (UniqueName: \"kubernetes.io/projected/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-kube-api-access-6szs7\") on node \"crc\" DevicePath \"\"" Feb 18 12:31:35 crc kubenswrapper[4717]: I0218 12:31:35.843024 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 12:31:36 crc kubenswrapper[4717]: I0218 12:31:36.179316 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" event={"ID":"95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963","Type":"ContainerDied","Data":"b6bc26d2f7fd4e0ffc975532c131015a8f2d4082d082b415900addc54204ee26"} Feb 18 12:31:36 crc kubenswrapper[4717]: I0218 12:31:36.179707 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6bc26d2f7fd4e0ffc975532c131015a8f2d4082d082b415900addc54204ee26" Feb 18 12:31:36 crc kubenswrapper[4717]: I0218 12:31:36.179419 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv" Feb 18 12:31:39 crc kubenswrapper[4717]: I0218 12:31:39.036520 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:31:39 crc kubenswrapper[4717]: E0218 12:31:39.037876 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:31:51 crc kubenswrapper[4717]: I0218 12:31:51.037630 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:31:51 crc kubenswrapper[4717]: E0218 12:31:51.038692 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:32:02 crc kubenswrapper[4717]: I0218 12:32:02.036093 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:32:02 crc kubenswrapper[4717]: E0218 12:32:02.036902 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:32:13 crc kubenswrapper[4717]: I0218 12:32:13.037422 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:32:13 crc kubenswrapper[4717]: E0218 12:32:13.038659 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:32:24 crc kubenswrapper[4717]: I0218 12:32:24.037228 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:32:24 crc kubenswrapper[4717]: E0218 12:32:24.038008 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.220532 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 12:32:33 crc kubenswrapper[4717]: E0218 12:32:33.222176 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.222198 4717 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 12:32:33 crc kubenswrapper[4717]: E0218 12:32:33.222215 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5" containerName="collect-profiles" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.222222 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5" containerName="collect-profiles" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.222558 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.222591 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b4ae5c-9b05-4ab7-b7a1-c61be7dff4e5" containerName="collect-profiles" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.223390 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.226823 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.227020 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-8snck" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.227062 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.227326 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.236255 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.385398 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8qv5\" (UniqueName: \"kubernetes.io/projected/00f1b8ee-1760-4308-b796-155234b0a811-kube-api-access-g8qv5\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.385487 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/00f1b8ee-1760-4308-b796-155234b0a811-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.385548 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-ca-certs\") 
pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.385602 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/00f1b8ee-1760-4308-b796-155234b0a811-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.385695 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00f1b8ee-1760-4308-b796-155234b0a811-config-data\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.385798 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/00f1b8ee-1760-4308-b796-155234b0a811-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.385924 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.385994 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.386051 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.488416 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.488488 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.488609 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8qv5\" (UniqueName: \"kubernetes.io/projected/00f1b8ee-1760-4308-b796-155234b0a811-kube-api-access-g8qv5\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.488639 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/00f1b8ee-1760-4308-b796-155234b0a811-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " 
pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.488668 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.488705 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/00f1b8ee-1760-4308-b796-155234b0a811-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.488736 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00f1b8ee-1760-4308-b796-155234b0a811-config-data\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.488777 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/00f1b8ee-1760-4308-b796-155234b0a811-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.488865 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc 
kubenswrapper[4717]: I0218 12:32:33.489064 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.489474 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/00f1b8ee-1760-4308-b796-155234b0a811-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.489868 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/00f1b8ee-1760-4308-b796-155234b0a811-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.490836 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00f1b8ee-1760-4308-b796-155234b0a811-config-data\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.491199 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/00f1b8ee-1760-4308-b796-155234b0a811-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.497245 4717 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.499814 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.502034 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.509236 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8qv5\" (UniqueName: \"kubernetes.io/projected/00f1b8ee-1760-4308-b796-155234b0a811-kube-api-access-g8qv5\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.523504 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " pod="openstack/tempest-tests-tempest" Feb 18 12:32:33 crc kubenswrapper[4717]: I0218 12:32:33.564609 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 12:32:34 crc kubenswrapper[4717]: I0218 12:32:34.038997 4717 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:32:34 crc kubenswrapper[4717]: I0218 12:32:34.040004 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 12:32:34 crc kubenswrapper[4717]: I0218 12:32:34.707936 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"00f1b8ee-1760-4308-b796-155234b0a811","Type":"ContainerStarted","Data":"81bf80aa58bb47082cd79efc8d46b8440d16157611b2d368ee91279e0e4f41e6"} Feb 18 12:32:36 crc kubenswrapper[4717]: I0218 12:32:36.037731 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:32:36 crc kubenswrapper[4717]: E0218 12:32:36.038878 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:32:51 crc kubenswrapper[4717]: I0218 12:32:51.037322 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:32:51 crc kubenswrapper[4717]: E0218 12:32:51.038186 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" 
podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:33:04 crc kubenswrapper[4717]: E0218 12:33:04.882209 4717 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 18 12:33:04 crc kubenswrapper[4717]: E0218 12:33:04.883073 4717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g8qv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(00f1b8ee-1760-4308-b796-155234b0a811): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 12:33:04 crc kubenswrapper[4717]: E0218 12:33:04.884335 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="00f1b8ee-1760-4308-b796-155234b0a811" Feb 18 12:33:05 crc kubenswrapper[4717]: E0218 12:33:05.028816 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="00f1b8ee-1760-4308-b796-155234b0a811" Feb 18 12:33:05 crc kubenswrapper[4717]: I0218 12:33:05.037520 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:33:05 crc kubenswrapper[4717]: E0218 12:33:05.037818 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:33:16 crc kubenswrapper[4717]: I0218 12:33:16.036808 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:33:16 crc kubenswrapper[4717]: E0218 12:33:16.038972 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:33:19 crc kubenswrapper[4717]: I0218 12:33:19.159834 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"00f1b8ee-1760-4308-b796-155234b0a811","Type":"ContainerStarted","Data":"85ef672346e0f1dbb1fb7e5cd2e38d46a3f7740b6abe1b547ae617cbad4e6fba"} Feb 18 12:33:19 crc kubenswrapper[4717]: I0218 12:33:19.184800 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.124723362 podStartE2EDuration="47.184776424s" podCreationTimestamp="2026-02-18 12:32:32 +0000 UTC" firstStartedPulling="2026-02-18 12:32:34.038754801 +0000 UTC m=+2588.440856117" lastFinishedPulling="2026-02-18 12:33:17.098807863 +0000 UTC m=+2631.500909179" observedRunningTime="2026-02-18 12:33:19.179662297 +0000 UTC m=+2633.581763623" watchObservedRunningTime="2026-02-18 12:33:19.184776424 +0000 UTC m=+2633.586877750" Feb 18 12:33:31 crc kubenswrapper[4717]: I0218 12:33:31.037522 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:33:31 crc kubenswrapper[4717]: E0218 12:33:31.038387 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:33:45 crc kubenswrapper[4717]: I0218 12:33:45.037073 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:33:45 crc kubenswrapper[4717]: E0218 12:33:45.037945 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:33:57 crc kubenswrapper[4717]: I0218 12:33:57.045707 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:33:57 crc kubenswrapper[4717]: E0218 12:33:57.046523 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:34:08 crc kubenswrapper[4717]: I0218 12:34:08.741011 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8kqrs"] Feb 18 12:34:08 crc kubenswrapper[4717]: I0218 12:34:08.752419 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:08 crc kubenswrapper[4717]: I0218 12:34:08.765154 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8kqrs"] Feb 18 12:34:08 crc kubenswrapper[4717]: I0218 12:34:08.950680 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-catalog-content\") pod \"community-operators-8kqrs\" (UID: \"bb5c7d93-ea39-43fb-b18e-36f7beec69c1\") " pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:08 crc kubenswrapper[4717]: I0218 12:34:08.951249 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqkv8\" (UniqueName: \"kubernetes.io/projected/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-kube-api-access-dqkv8\") pod \"community-operators-8kqrs\" (UID: \"bb5c7d93-ea39-43fb-b18e-36f7beec69c1\") " pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:08 crc kubenswrapper[4717]: I0218 12:34:08.951467 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-utilities\") pod \"community-operators-8kqrs\" (UID: \"bb5c7d93-ea39-43fb-b18e-36f7beec69c1\") " pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:09 crc kubenswrapper[4717]: I0218 12:34:09.054023 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-catalog-content\") pod \"community-operators-8kqrs\" (UID: \"bb5c7d93-ea39-43fb-b18e-36f7beec69c1\") " pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:09 crc kubenswrapper[4717]: I0218 12:34:09.054107 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dqkv8\" (UniqueName: \"kubernetes.io/projected/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-kube-api-access-dqkv8\") pod \"community-operators-8kqrs\" (UID: \"bb5c7d93-ea39-43fb-b18e-36f7beec69c1\") " pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:09 crc kubenswrapper[4717]: I0218 12:34:09.054140 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-utilities\") pod \"community-operators-8kqrs\" (UID: \"bb5c7d93-ea39-43fb-b18e-36f7beec69c1\") " pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:09 crc kubenswrapper[4717]: I0218 12:34:09.054734 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-utilities\") pod \"community-operators-8kqrs\" (UID: \"bb5c7d93-ea39-43fb-b18e-36f7beec69c1\") " pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:09 crc kubenswrapper[4717]: I0218 12:34:09.055180 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-catalog-content\") pod \"community-operators-8kqrs\" (UID: \"bb5c7d93-ea39-43fb-b18e-36f7beec69c1\") " pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:09 crc kubenswrapper[4717]: I0218 12:34:09.079906 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqkv8\" (UniqueName: \"kubernetes.io/projected/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-kube-api-access-dqkv8\") pod \"community-operators-8kqrs\" (UID: \"bb5c7d93-ea39-43fb-b18e-36f7beec69c1\") " pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:09 crc kubenswrapper[4717]: I0218 12:34:09.087518 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:09 crc kubenswrapper[4717]: I0218 12:34:09.671695 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8kqrs"] Feb 18 12:34:10 crc kubenswrapper[4717]: I0218 12:34:10.664392 4717 generic.go:334] "Generic (PLEG): container finished" podID="bb5c7d93-ea39-43fb-b18e-36f7beec69c1" containerID="f79815282bf6e887b257fa5201f9e1312c6143338dc16acd524f29581d756462" exitCode=0 Feb 18 12:34:10 crc kubenswrapper[4717]: I0218 12:34:10.664454 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kqrs" event={"ID":"bb5c7d93-ea39-43fb-b18e-36f7beec69c1","Type":"ContainerDied","Data":"f79815282bf6e887b257fa5201f9e1312c6143338dc16acd524f29581d756462"} Feb 18 12:34:10 crc kubenswrapper[4717]: I0218 12:34:10.664710 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kqrs" event={"ID":"bb5c7d93-ea39-43fb-b18e-36f7beec69c1","Type":"ContainerStarted","Data":"abe8c2f2ba9e9bebe7ba57f849f475034db95655cb299abd2e844d301787a451"} Feb 18 12:34:11 crc kubenswrapper[4717]: I0218 12:34:11.036880 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:34:11 crc kubenswrapper[4717]: E0218 12:34:11.037147 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:34:11 crc kubenswrapper[4717]: I0218 12:34:11.675415 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kqrs" 
event={"ID":"bb5c7d93-ea39-43fb-b18e-36f7beec69c1","Type":"ContainerStarted","Data":"0cc519ebd92f7a07d365ec91dfa475e44a8f268541955a42ddcf9114f6d0d9aa"} Feb 18 12:34:12 crc kubenswrapper[4717]: I0218 12:34:12.687117 4717 generic.go:334] "Generic (PLEG): container finished" podID="bb5c7d93-ea39-43fb-b18e-36f7beec69c1" containerID="0cc519ebd92f7a07d365ec91dfa475e44a8f268541955a42ddcf9114f6d0d9aa" exitCode=0 Feb 18 12:34:12 crc kubenswrapper[4717]: I0218 12:34:12.687196 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kqrs" event={"ID":"bb5c7d93-ea39-43fb-b18e-36f7beec69c1","Type":"ContainerDied","Data":"0cc519ebd92f7a07d365ec91dfa475e44a8f268541955a42ddcf9114f6d0d9aa"} Feb 18 12:34:13 crc kubenswrapper[4717]: I0218 12:34:13.698836 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kqrs" event={"ID":"bb5c7d93-ea39-43fb-b18e-36f7beec69c1","Type":"ContainerStarted","Data":"c6e02fd2cdff65ce0afe71afe2e5b785c77f87b4bf7a497ae7328aaca563c160"} Feb 18 12:34:13 crc kubenswrapper[4717]: I0218 12:34:13.725286 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8kqrs" podStartSLOduration=3.278285431 podStartE2EDuration="5.725248458s" podCreationTimestamp="2026-02-18 12:34:08 +0000 UTC" firstStartedPulling="2026-02-18 12:34:10.666852671 +0000 UTC m=+2685.068953987" lastFinishedPulling="2026-02-18 12:34:13.113815698 +0000 UTC m=+2687.515917014" observedRunningTime="2026-02-18 12:34:13.72007066 +0000 UTC m=+2688.122171976" watchObservedRunningTime="2026-02-18 12:34:13.725248458 +0000 UTC m=+2688.127349774" Feb 18 12:34:19 crc kubenswrapper[4717]: I0218 12:34:19.088822 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:19 crc kubenswrapper[4717]: I0218 12:34:19.089510 4717 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:19 crc kubenswrapper[4717]: I0218 12:34:19.149951 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:19 crc kubenswrapper[4717]: I0218 12:34:19.802196 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:19 crc kubenswrapper[4717]: I0218 12:34:19.856595 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8kqrs"] Feb 18 12:34:21 crc kubenswrapper[4717]: I0218 12:34:21.771190 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8kqrs" podUID="bb5c7d93-ea39-43fb-b18e-36f7beec69c1" containerName="registry-server" containerID="cri-o://c6e02fd2cdff65ce0afe71afe2e5b785c77f87b4bf7a497ae7328aaca563c160" gracePeriod=2 Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.315306 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.364956 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-catalog-content\") pod \"bb5c7d93-ea39-43fb-b18e-36f7beec69c1\" (UID: \"bb5c7d93-ea39-43fb-b18e-36f7beec69c1\") " Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.365022 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-utilities\") pod \"bb5c7d93-ea39-43fb-b18e-36f7beec69c1\" (UID: \"bb5c7d93-ea39-43fb-b18e-36f7beec69c1\") " Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.365133 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqkv8\" (UniqueName: \"kubernetes.io/projected/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-kube-api-access-dqkv8\") pod \"bb5c7d93-ea39-43fb-b18e-36f7beec69c1\" (UID: \"bb5c7d93-ea39-43fb-b18e-36f7beec69c1\") " Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.366377 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-utilities" (OuterVolumeSpecName: "utilities") pod "bb5c7d93-ea39-43fb-b18e-36f7beec69c1" (UID: "bb5c7d93-ea39-43fb-b18e-36f7beec69c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.371232 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-kube-api-access-dqkv8" (OuterVolumeSpecName: "kube-api-access-dqkv8") pod "bb5c7d93-ea39-43fb-b18e-36f7beec69c1" (UID: "bb5c7d93-ea39-43fb-b18e-36f7beec69c1"). InnerVolumeSpecName "kube-api-access-dqkv8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.432426 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb5c7d93-ea39-43fb-b18e-36f7beec69c1" (UID: "bb5c7d93-ea39-43fb-b18e-36f7beec69c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.467791 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.467841 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.467852 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqkv8\" (UniqueName: \"kubernetes.io/projected/bb5c7d93-ea39-43fb-b18e-36f7beec69c1-kube-api-access-dqkv8\") on node \"crc\" DevicePath \"\"" Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.785277 4717 generic.go:334] "Generic (PLEG): container finished" podID="bb5c7d93-ea39-43fb-b18e-36f7beec69c1" containerID="c6e02fd2cdff65ce0afe71afe2e5b785c77f87b4bf7a497ae7328aaca563c160" exitCode=0 Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.785335 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8kqrs" Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.785331 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kqrs" event={"ID":"bb5c7d93-ea39-43fb-b18e-36f7beec69c1","Type":"ContainerDied","Data":"c6e02fd2cdff65ce0afe71afe2e5b785c77f87b4bf7a497ae7328aaca563c160"} Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.785567 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kqrs" event={"ID":"bb5c7d93-ea39-43fb-b18e-36f7beec69c1","Type":"ContainerDied","Data":"abe8c2f2ba9e9bebe7ba57f849f475034db95655cb299abd2e844d301787a451"} Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.785612 4717 scope.go:117] "RemoveContainer" containerID="c6e02fd2cdff65ce0afe71afe2e5b785c77f87b4bf7a497ae7328aaca563c160" Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.823505 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8kqrs"] Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.831868 4717 scope.go:117] "RemoveContainer" containerID="0cc519ebd92f7a07d365ec91dfa475e44a8f268541955a42ddcf9114f6d0d9aa" Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.839633 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8kqrs"] Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.860898 4717 scope.go:117] "RemoveContainer" containerID="f79815282bf6e887b257fa5201f9e1312c6143338dc16acd524f29581d756462" Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.904194 4717 scope.go:117] "RemoveContainer" containerID="c6e02fd2cdff65ce0afe71afe2e5b785c77f87b4bf7a497ae7328aaca563c160" Feb 18 12:34:22 crc kubenswrapper[4717]: E0218 12:34:22.904784 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c6e02fd2cdff65ce0afe71afe2e5b785c77f87b4bf7a497ae7328aaca563c160\": container with ID starting with c6e02fd2cdff65ce0afe71afe2e5b785c77f87b4bf7a497ae7328aaca563c160 not found: ID does not exist" containerID="c6e02fd2cdff65ce0afe71afe2e5b785c77f87b4bf7a497ae7328aaca563c160" Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.904834 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6e02fd2cdff65ce0afe71afe2e5b785c77f87b4bf7a497ae7328aaca563c160"} err="failed to get container status \"c6e02fd2cdff65ce0afe71afe2e5b785c77f87b4bf7a497ae7328aaca563c160\": rpc error: code = NotFound desc = could not find container \"c6e02fd2cdff65ce0afe71afe2e5b785c77f87b4bf7a497ae7328aaca563c160\": container with ID starting with c6e02fd2cdff65ce0afe71afe2e5b785c77f87b4bf7a497ae7328aaca563c160 not found: ID does not exist" Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.904866 4717 scope.go:117] "RemoveContainer" containerID="0cc519ebd92f7a07d365ec91dfa475e44a8f268541955a42ddcf9114f6d0d9aa" Feb 18 12:34:22 crc kubenswrapper[4717]: E0218 12:34:22.905333 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cc519ebd92f7a07d365ec91dfa475e44a8f268541955a42ddcf9114f6d0d9aa\": container with ID starting with 0cc519ebd92f7a07d365ec91dfa475e44a8f268541955a42ddcf9114f6d0d9aa not found: ID does not exist" containerID="0cc519ebd92f7a07d365ec91dfa475e44a8f268541955a42ddcf9114f6d0d9aa" Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.905394 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cc519ebd92f7a07d365ec91dfa475e44a8f268541955a42ddcf9114f6d0d9aa"} err="failed to get container status \"0cc519ebd92f7a07d365ec91dfa475e44a8f268541955a42ddcf9114f6d0d9aa\": rpc error: code = NotFound desc = could not find container \"0cc519ebd92f7a07d365ec91dfa475e44a8f268541955a42ddcf9114f6d0d9aa\": container with ID 
starting with 0cc519ebd92f7a07d365ec91dfa475e44a8f268541955a42ddcf9114f6d0d9aa not found: ID does not exist" Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.905428 4717 scope.go:117] "RemoveContainer" containerID="f79815282bf6e887b257fa5201f9e1312c6143338dc16acd524f29581d756462" Feb 18 12:34:22 crc kubenswrapper[4717]: E0218 12:34:22.905922 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f79815282bf6e887b257fa5201f9e1312c6143338dc16acd524f29581d756462\": container with ID starting with f79815282bf6e887b257fa5201f9e1312c6143338dc16acd524f29581d756462 not found: ID does not exist" containerID="f79815282bf6e887b257fa5201f9e1312c6143338dc16acd524f29581d756462" Feb 18 12:34:22 crc kubenswrapper[4717]: I0218 12:34:22.905964 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f79815282bf6e887b257fa5201f9e1312c6143338dc16acd524f29581d756462"} err="failed to get container status \"f79815282bf6e887b257fa5201f9e1312c6143338dc16acd524f29581d756462\": rpc error: code = NotFound desc = could not find container \"f79815282bf6e887b257fa5201f9e1312c6143338dc16acd524f29581d756462\": container with ID starting with f79815282bf6e887b257fa5201f9e1312c6143338dc16acd524f29581d756462 not found: ID does not exist" Feb 18 12:34:23 crc kubenswrapper[4717]: I0218 12:34:23.053369 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb5c7d93-ea39-43fb-b18e-36f7beec69c1" path="/var/lib/kubelet/pods/bb5c7d93-ea39-43fb-b18e-36f7beec69c1/volumes" Feb 18 12:34:24 crc kubenswrapper[4717]: I0218 12:34:24.036640 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:34:24 crc kubenswrapper[4717]: E0218 12:34:24.036871 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:34:36 crc kubenswrapper[4717]: I0218 12:34:36.037168 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:34:36 crc kubenswrapper[4717]: E0218 12:34:36.038208 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:34:47 crc kubenswrapper[4717]: I0218 12:34:47.046548 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:34:48 crc kubenswrapper[4717]: I0218 12:34:48.014118 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"9adf96ebc2f2c8d83c4d469fe4c4bd4074d20e003b5cc237ff59f1cf54d44c39"} Feb 18 12:34:53 crc kubenswrapper[4717]: I0218 12:34:53.735887 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7nrqk"] Feb 18 12:34:53 crc kubenswrapper[4717]: E0218 12:34:53.740882 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb5c7d93-ea39-43fb-b18e-36f7beec69c1" containerName="extract-content" Feb 18 12:34:53 crc kubenswrapper[4717]: I0218 12:34:53.740915 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb5c7d93-ea39-43fb-b18e-36f7beec69c1" 
containerName="extract-content" Feb 18 12:34:53 crc kubenswrapper[4717]: E0218 12:34:53.740952 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb5c7d93-ea39-43fb-b18e-36f7beec69c1" containerName="extract-utilities" Feb 18 12:34:53 crc kubenswrapper[4717]: I0218 12:34:53.740965 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb5c7d93-ea39-43fb-b18e-36f7beec69c1" containerName="extract-utilities" Feb 18 12:34:53 crc kubenswrapper[4717]: E0218 12:34:53.741043 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb5c7d93-ea39-43fb-b18e-36f7beec69c1" containerName="registry-server" Feb 18 12:34:53 crc kubenswrapper[4717]: I0218 12:34:53.741059 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb5c7d93-ea39-43fb-b18e-36f7beec69c1" containerName="registry-server" Feb 18 12:34:53 crc kubenswrapper[4717]: I0218 12:34:53.741352 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb5c7d93-ea39-43fb-b18e-36f7beec69c1" containerName="registry-server" Feb 18 12:34:53 crc kubenswrapper[4717]: I0218 12:34:53.743006 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:34:53 crc kubenswrapper[4717]: I0218 12:34:53.757757 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7nrqk"] Feb 18 12:34:53 crc kubenswrapper[4717]: I0218 12:34:53.760077 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1723d26f-9319-4889-a714-8d7bf3ad9ef5-catalog-content\") pod \"certified-operators-7nrqk\" (UID: \"1723d26f-9319-4889-a714-8d7bf3ad9ef5\") " pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:34:53 crc kubenswrapper[4717]: I0218 12:34:53.760171 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np4vg\" (UniqueName: \"kubernetes.io/projected/1723d26f-9319-4889-a714-8d7bf3ad9ef5-kube-api-access-np4vg\") pod \"certified-operators-7nrqk\" (UID: \"1723d26f-9319-4889-a714-8d7bf3ad9ef5\") " pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:34:53 crc kubenswrapper[4717]: I0218 12:34:53.760207 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1723d26f-9319-4889-a714-8d7bf3ad9ef5-utilities\") pod \"certified-operators-7nrqk\" (UID: \"1723d26f-9319-4889-a714-8d7bf3ad9ef5\") " pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:34:53 crc kubenswrapper[4717]: I0218 12:34:53.861466 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1723d26f-9319-4889-a714-8d7bf3ad9ef5-catalog-content\") pod \"certified-operators-7nrqk\" (UID: \"1723d26f-9319-4889-a714-8d7bf3ad9ef5\") " pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:34:53 crc kubenswrapper[4717]: I0218 12:34:53.861540 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-np4vg\" (UniqueName: \"kubernetes.io/projected/1723d26f-9319-4889-a714-8d7bf3ad9ef5-kube-api-access-np4vg\") pod \"certified-operators-7nrqk\" (UID: \"1723d26f-9319-4889-a714-8d7bf3ad9ef5\") " pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:34:53 crc kubenswrapper[4717]: I0218 12:34:53.861565 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1723d26f-9319-4889-a714-8d7bf3ad9ef5-utilities\") pod \"certified-operators-7nrqk\" (UID: \"1723d26f-9319-4889-a714-8d7bf3ad9ef5\") " pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:34:53 crc kubenswrapper[4717]: I0218 12:34:53.862115 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1723d26f-9319-4889-a714-8d7bf3ad9ef5-utilities\") pod \"certified-operators-7nrqk\" (UID: \"1723d26f-9319-4889-a714-8d7bf3ad9ef5\") " pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:34:53 crc kubenswrapper[4717]: I0218 12:34:53.862554 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1723d26f-9319-4889-a714-8d7bf3ad9ef5-catalog-content\") pod \"certified-operators-7nrqk\" (UID: \"1723d26f-9319-4889-a714-8d7bf3ad9ef5\") " pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:34:53 crc kubenswrapper[4717]: I0218 12:34:53.886376 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np4vg\" (UniqueName: \"kubernetes.io/projected/1723d26f-9319-4889-a714-8d7bf3ad9ef5-kube-api-access-np4vg\") pod \"certified-operators-7nrqk\" (UID: \"1723d26f-9319-4889-a714-8d7bf3ad9ef5\") " pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:34:54 crc kubenswrapper[4717]: I0218 12:34:54.073181 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:34:54 crc kubenswrapper[4717]: I0218 12:34:54.724131 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7nrqk"] Feb 18 12:34:55 crc kubenswrapper[4717]: I0218 12:34:55.097987 4717 generic.go:334] "Generic (PLEG): container finished" podID="1723d26f-9319-4889-a714-8d7bf3ad9ef5" containerID="e603b997470a886a3fefa07b76e55e07bc4faf0c43fc04bb39311edbcc40cd8d" exitCode=0 Feb 18 12:34:55 crc kubenswrapper[4717]: I0218 12:34:55.098091 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nrqk" event={"ID":"1723d26f-9319-4889-a714-8d7bf3ad9ef5","Type":"ContainerDied","Data":"e603b997470a886a3fefa07b76e55e07bc4faf0c43fc04bb39311edbcc40cd8d"} Feb 18 12:34:55 crc kubenswrapper[4717]: I0218 12:34:55.098522 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nrqk" event={"ID":"1723d26f-9319-4889-a714-8d7bf3ad9ef5","Type":"ContainerStarted","Data":"4f9ebc6bdc9089424300fce7be3b4b844d28c93ec01f5e0ba862ce255d589860"} Feb 18 12:34:56 crc kubenswrapper[4717]: I0218 12:34:56.112879 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nrqk" event={"ID":"1723d26f-9319-4889-a714-8d7bf3ad9ef5","Type":"ContainerStarted","Data":"65966768751f4d19bbfef17337c5e5318d3b9edf7e4e49e4749a9026d3e5d7e5"} Feb 18 12:34:59 crc kubenswrapper[4717]: I0218 12:34:59.142021 4717 generic.go:334] "Generic (PLEG): container finished" podID="1723d26f-9319-4889-a714-8d7bf3ad9ef5" containerID="65966768751f4d19bbfef17337c5e5318d3b9edf7e4e49e4749a9026d3e5d7e5" exitCode=0 Feb 18 12:34:59 crc kubenswrapper[4717]: I0218 12:34:59.142119 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nrqk" 
event={"ID":"1723d26f-9319-4889-a714-8d7bf3ad9ef5","Type":"ContainerDied","Data":"65966768751f4d19bbfef17337c5e5318d3b9edf7e4e49e4749a9026d3e5d7e5"} Feb 18 12:35:00 crc kubenswrapper[4717]: I0218 12:35:00.153570 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nrqk" event={"ID":"1723d26f-9319-4889-a714-8d7bf3ad9ef5","Type":"ContainerStarted","Data":"4817cd078b96a6a328eedeb053c9a52a127471ea2c83165a6df3293ecac94f61"} Feb 18 12:35:00 crc kubenswrapper[4717]: I0218 12:35:00.175833 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7nrqk" podStartSLOduration=2.749871461 podStartE2EDuration="7.175805657s" podCreationTimestamp="2026-02-18 12:34:53 +0000 UTC" firstStartedPulling="2026-02-18 12:34:55.100095345 +0000 UTC m=+2729.502196661" lastFinishedPulling="2026-02-18 12:34:59.526029541 +0000 UTC m=+2733.928130857" observedRunningTime="2026-02-18 12:35:00.173749178 +0000 UTC m=+2734.575850514" watchObservedRunningTime="2026-02-18 12:35:00.175805657 +0000 UTC m=+2734.577906973" Feb 18 12:35:04 crc kubenswrapper[4717]: I0218 12:35:04.073608 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:35:04 crc kubenswrapper[4717]: I0218 12:35:04.074311 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:35:04 crc kubenswrapper[4717]: I0218 12:35:04.125949 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:35:04 crc kubenswrapper[4717]: I0218 12:35:04.228682 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:35:04 crc kubenswrapper[4717]: I0218 12:35:04.366037 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-7nrqk"] Feb 18 12:35:06 crc kubenswrapper[4717]: I0218 12:35:06.205123 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7nrqk" podUID="1723d26f-9319-4889-a714-8d7bf3ad9ef5" containerName="registry-server" containerID="cri-o://4817cd078b96a6a328eedeb053c9a52a127471ea2c83165a6df3293ecac94f61" gracePeriod=2 Feb 18 12:35:06 crc kubenswrapper[4717]: I0218 12:35:06.667993 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:35:06 crc kubenswrapper[4717]: I0218 12:35:06.755299 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1723d26f-9319-4889-a714-8d7bf3ad9ef5-catalog-content\") pod \"1723d26f-9319-4889-a714-8d7bf3ad9ef5\" (UID: \"1723d26f-9319-4889-a714-8d7bf3ad9ef5\") " Feb 18 12:35:06 crc kubenswrapper[4717]: I0218 12:35:06.755479 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1723d26f-9319-4889-a714-8d7bf3ad9ef5-utilities\") pod \"1723d26f-9319-4889-a714-8d7bf3ad9ef5\" (UID: \"1723d26f-9319-4889-a714-8d7bf3ad9ef5\") " Feb 18 12:35:06 crc kubenswrapper[4717]: I0218 12:35:06.755536 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np4vg\" (UniqueName: \"kubernetes.io/projected/1723d26f-9319-4889-a714-8d7bf3ad9ef5-kube-api-access-np4vg\") pod \"1723d26f-9319-4889-a714-8d7bf3ad9ef5\" (UID: \"1723d26f-9319-4889-a714-8d7bf3ad9ef5\") " Feb 18 12:35:06 crc kubenswrapper[4717]: I0218 12:35:06.757157 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1723d26f-9319-4889-a714-8d7bf3ad9ef5-utilities" (OuterVolumeSpecName: "utilities") pod "1723d26f-9319-4889-a714-8d7bf3ad9ef5" (UID: 
"1723d26f-9319-4889-a714-8d7bf3ad9ef5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:35:06 crc kubenswrapper[4717]: I0218 12:35:06.762442 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1723d26f-9319-4889-a714-8d7bf3ad9ef5-kube-api-access-np4vg" (OuterVolumeSpecName: "kube-api-access-np4vg") pod "1723d26f-9319-4889-a714-8d7bf3ad9ef5" (UID: "1723d26f-9319-4889-a714-8d7bf3ad9ef5"). InnerVolumeSpecName "kube-api-access-np4vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:35:06 crc kubenswrapper[4717]: I0218 12:35:06.814791 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1723d26f-9319-4889-a714-8d7bf3ad9ef5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1723d26f-9319-4889-a714-8d7bf3ad9ef5" (UID: "1723d26f-9319-4889-a714-8d7bf3ad9ef5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:35:06 crc kubenswrapper[4717]: I0218 12:35:06.857969 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1723d26f-9319-4889-a714-8d7bf3ad9ef5-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:35:06 crc kubenswrapper[4717]: I0218 12:35:06.858009 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np4vg\" (UniqueName: \"kubernetes.io/projected/1723d26f-9319-4889-a714-8d7bf3ad9ef5-kube-api-access-np4vg\") on node \"crc\" DevicePath \"\"" Feb 18 12:35:06 crc kubenswrapper[4717]: I0218 12:35:06.858024 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1723d26f-9319-4889-a714-8d7bf3ad9ef5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:35:07 crc kubenswrapper[4717]: I0218 12:35:07.218437 4717 generic.go:334] "Generic (PLEG): container finished" 
podID="1723d26f-9319-4889-a714-8d7bf3ad9ef5" containerID="4817cd078b96a6a328eedeb053c9a52a127471ea2c83165a6df3293ecac94f61" exitCode=0 Feb 18 12:35:07 crc kubenswrapper[4717]: I0218 12:35:07.218513 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nrqk" event={"ID":"1723d26f-9319-4889-a714-8d7bf3ad9ef5","Type":"ContainerDied","Data":"4817cd078b96a6a328eedeb053c9a52a127471ea2c83165a6df3293ecac94f61"} Feb 18 12:35:07 crc kubenswrapper[4717]: I0218 12:35:07.220227 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7nrqk" event={"ID":"1723d26f-9319-4889-a714-8d7bf3ad9ef5","Type":"ContainerDied","Data":"4f9ebc6bdc9089424300fce7be3b4b844d28c93ec01f5e0ba862ce255d589860"} Feb 18 12:35:07 crc kubenswrapper[4717]: I0218 12:35:07.218530 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7nrqk" Feb 18 12:35:07 crc kubenswrapper[4717]: I0218 12:35:07.220302 4717 scope.go:117] "RemoveContainer" containerID="4817cd078b96a6a328eedeb053c9a52a127471ea2c83165a6df3293ecac94f61" Feb 18 12:35:07 crc kubenswrapper[4717]: I0218 12:35:07.250211 4717 scope.go:117] "RemoveContainer" containerID="65966768751f4d19bbfef17337c5e5318d3b9edf7e4e49e4749a9026d3e5d7e5" Feb 18 12:35:07 crc kubenswrapper[4717]: I0218 12:35:07.261853 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7nrqk"] Feb 18 12:35:07 crc kubenswrapper[4717]: I0218 12:35:07.273181 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7nrqk"] Feb 18 12:35:07 crc kubenswrapper[4717]: I0218 12:35:07.278500 4717 scope.go:117] "RemoveContainer" containerID="e603b997470a886a3fefa07b76e55e07bc4faf0c43fc04bb39311edbcc40cd8d" Feb 18 12:35:07 crc kubenswrapper[4717]: I0218 12:35:07.320781 4717 scope.go:117] "RemoveContainer" 
containerID="4817cd078b96a6a328eedeb053c9a52a127471ea2c83165a6df3293ecac94f61" Feb 18 12:35:07 crc kubenswrapper[4717]: E0218 12:35:07.323236 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4817cd078b96a6a328eedeb053c9a52a127471ea2c83165a6df3293ecac94f61\": container with ID starting with 4817cd078b96a6a328eedeb053c9a52a127471ea2c83165a6df3293ecac94f61 not found: ID does not exist" containerID="4817cd078b96a6a328eedeb053c9a52a127471ea2c83165a6df3293ecac94f61" Feb 18 12:35:07 crc kubenswrapper[4717]: I0218 12:35:07.323413 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4817cd078b96a6a328eedeb053c9a52a127471ea2c83165a6df3293ecac94f61"} err="failed to get container status \"4817cd078b96a6a328eedeb053c9a52a127471ea2c83165a6df3293ecac94f61\": rpc error: code = NotFound desc = could not find container \"4817cd078b96a6a328eedeb053c9a52a127471ea2c83165a6df3293ecac94f61\": container with ID starting with 4817cd078b96a6a328eedeb053c9a52a127471ea2c83165a6df3293ecac94f61 not found: ID does not exist" Feb 18 12:35:07 crc kubenswrapper[4717]: I0218 12:35:07.323534 4717 scope.go:117] "RemoveContainer" containerID="65966768751f4d19bbfef17337c5e5318d3b9edf7e4e49e4749a9026d3e5d7e5" Feb 18 12:35:07 crc kubenswrapper[4717]: E0218 12:35:07.323964 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65966768751f4d19bbfef17337c5e5318d3b9edf7e4e49e4749a9026d3e5d7e5\": container with ID starting with 65966768751f4d19bbfef17337c5e5318d3b9edf7e4e49e4749a9026d3e5d7e5 not found: ID does not exist" containerID="65966768751f4d19bbfef17337c5e5318d3b9edf7e4e49e4749a9026d3e5d7e5" Feb 18 12:35:07 crc kubenswrapper[4717]: I0218 12:35:07.324038 4717 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"65966768751f4d19bbfef17337c5e5318d3b9edf7e4e49e4749a9026d3e5d7e5"} err="failed to get container status \"65966768751f4d19bbfef17337c5e5318d3b9edf7e4e49e4749a9026d3e5d7e5\": rpc error: code = NotFound desc = could not find container \"65966768751f4d19bbfef17337c5e5318d3b9edf7e4e49e4749a9026d3e5d7e5\": container with ID starting with 65966768751f4d19bbfef17337c5e5318d3b9edf7e4e49e4749a9026d3e5d7e5 not found: ID does not exist" Feb 18 12:35:07 crc kubenswrapper[4717]: I0218 12:35:07.324114 4717 scope.go:117] "RemoveContainer" containerID="e603b997470a886a3fefa07b76e55e07bc4faf0c43fc04bb39311edbcc40cd8d" Feb 18 12:35:07 crc kubenswrapper[4717]: E0218 12:35:07.324783 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e603b997470a886a3fefa07b76e55e07bc4faf0c43fc04bb39311edbcc40cd8d\": container with ID starting with e603b997470a886a3fefa07b76e55e07bc4faf0c43fc04bb39311edbcc40cd8d not found: ID does not exist" containerID="e603b997470a886a3fefa07b76e55e07bc4faf0c43fc04bb39311edbcc40cd8d" Feb 18 12:35:07 crc kubenswrapper[4717]: I0218 12:35:07.324904 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e603b997470a886a3fefa07b76e55e07bc4faf0c43fc04bb39311edbcc40cd8d"} err="failed to get container status \"e603b997470a886a3fefa07b76e55e07bc4faf0c43fc04bb39311edbcc40cd8d\": rpc error: code = NotFound desc = could not find container \"e603b997470a886a3fefa07b76e55e07bc4faf0c43fc04bb39311edbcc40cd8d\": container with ID starting with e603b997470a886a3fefa07b76e55e07bc4faf0c43fc04bb39311edbcc40cd8d not found: ID does not exist" Feb 18 12:35:09 crc kubenswrapper[4717]: I0218 12:35:09.063672 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1723d26f-9319-4889-a714-8d7bf3ad9ef5" path="/var/lib/kubelet/pods/1723d26f-9319-4889-a714-8d7bf3ad9ef5/volumes" Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 
12:35:16.179794 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n5xtf"] Feb 18 12:35:16 crc kubenswrapper[4717]: E0218 12:35:16.184783 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1723d26f-9319-4889-a714-8d7bf3ad9ef5" containerName="extract-content" Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 12:35:16.184805 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="1723d26f-9319-4889-a714-8d7bf3ad9ef5" containerName="extract-content" Feb 18 12:35:16 crc kubenswrapper[4717]: E0218 12:35:16.184824 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1723d26f-9319-4889-a714-8d7bf3ad9ef5" containerName="extract-utilities" Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 12:35:16.184831 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="1723d26f-9319-4889-a714-8d7bf3ad9ef5" containerName="extract-utilities" Feb 18 12:35:16 crc kubenswrapper[4717]: E0218 12:35:16.184852 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1723d26f-9319-4889-a714-8d7bf3ad9ef5" containerName="registry-server" Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 12:35:16.184860 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="1723d26f-9319-4889-a714-8d7bf3ad9ef5" containerName="registry-server" Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 12:35:16.185121 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="1723d26f-9319-4889-a714-8d7bf3ad9ef5" containerName="registry-server" Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 12:35:16.186695 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 12:35:16.209202 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5xtf"] Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 12:35:16.293997 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79597d5a-1a26-4e38-a187-81877a152ab0-catalog-content\") pod \"redhat-marketplace-n5xtf\" (UID: \"79597d5a-1a26-4e38-a187-81877a152ab0\") " pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 12:35:16.294109 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79597d5a-1a26-4e38-a187-81877a152ab0-utilities\") pod \"redhat-marketplace-n5xtf\" (UID: \"79597d5a-1a26-4e38-a187-81877a152ab0\") " pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 12:35:16.294143 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxng9\" (UniqueName: \"kubernetes.io/projected/79597d5a-1a26-4e38-a187-81877a152ab0-kube-api-access-dxng9\") pod \"redhat-marketplace-n5xtf\" (UID: \"79597d5a-1a26-4e38-a187-81877a152ab0\") " pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 12:35:16.396327 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79597d5a-1a26-4e38-a187-81877a152ab0-catalog-content\") pod \"redhat-marketplace-n5xtf\" (UID: \"79597d5a-1a26-4e38-a187-81877a152ab0\") " pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 12:35:16.396428 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79597d5a-1a26-4e38-a187-81877a152ab0-utilities\") pod \"redhat-marketplace-n5xtf\" (UID: \"79597d5a-1a26-4e38-a187-81877a152ab0\") " pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 12:35:16.396462 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxng9\" (UniqueName: \"kubernetes.io/projected/79597d5a-1a26-4e38-a187-81877a152ab0-kube-api-access-dxng9\") pod \"redhat-marketplace-n5xtf\" (UID: \"79597d5a-1a26-4e38-a187-81877a152ab0\") " pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 12:35:16.397477 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79597d5a-1a26-4e38-a187-81877a152ab0-catalog-content\") pod \"redhat-marketplace-n5xtf\" (UID: \"79597d5a-1a26-4e38-a187-81877a152ab0\") " pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 12:35:16.397605 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79597d5a-1a26-4e38-a187-81877a152ab0-utilities\") pod \"redhat-marketplace-n5xtf\" (UID: \"79597d5a-1a26-4e38-a187-81877a152ab0\") " pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 12:35:16.429796 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxng9\" (UniqueName: \"kubernetes.io/projected/79597d5a-1a26-4e38-a187-81877a152ab0-kube-api-access-dxng9\") pod \"redhat-marketplace-n5xtf\" (UID: \"79597d5a-1a26-4e38-a187-81877a152ab0\") " pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:16 crc kubenswrapper[4717]: I0218 12:35:16.547116 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:17 crc kubenswrapper[4717]: I0218 12:35:17.054597 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5xtf"] Feb 18 12:35:17 crc kubenswrapper[4717]: I0218 12:35:17.361372 4717 generic.go:334] "Generic (PLEG): container finished" podID="79597d5a-1a26-4e38-a187-81877a152ab0" containerID="43f82c5d3fbced753b5a47a650afc5cb8d2c065fe8d9a2e4b2359be10cd58998" exitCode=0 Feb 18 12:35:17 crc kubenswrapper[4717]: I0218 12:35:17.361434 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5xtf" event={"ID":"79597d5a-1a26-4e38-a187-81877a152ab0","Type":"ContainerDied","Data":"43f82c5d3fbced753b5a47a650afc5cb8d2c065fe8d9a2e4b2359be10cd58998"} Feb 18 12:35:17 crc kubenswrapper[4717]: I0218 12:35:17.361478 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5xtf" event={"ID":"79597d5a-1a26-4e38-a187-81877a152ab0","Type":"ContainerStarted","Data":"63920862c05da91a42ce5530218dac1cf5475f484a8c2958dc8bcffedc98c105"} Feb 18 12:35:19 crc kubenswrapper[4717]: I0218 12:35:19.582405 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5xtf" event={"ID":"79597d5a-1a26-4e38-a187-81877a152ab0","Type":"ContainerStarted","Data":"1225579abae3071fbdbe1c85d3717318de6422300bbf3061a221231a4cfc5c8e"} Feb 18 12:35:20 crc kubenswrapper[4717]: I0218 12:35:20.593635 4717 generic.go:334] "Generic (PLEG): container finished" podID="79597d5a-1a26-4e38-a187-81877a152ab0" containerID="1225579abae3071fbdbe1c85d3717318de6422300bbf3061a221231a4cfc5c8e" exitCode=0 Feb 18 12:35:20 crc kubenswrapper[4717]: I0218 12:35:20.593700 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5xtf" 
event={"ID":"79597d5a-1a26-4e38-a187-81877a152ab0","Type":"ContainerDied","Data":"1225579abae3071fbdbe1c85d3717318de6422300bbf3061a221231a4cfc5c8e"} Feb 18 12:35:21 crc kubenswrapper[4717]: I0218 12:35:21.604440 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5xtf" event={"ID":"79597d5a-1a26-4e38-a187-81877a152ab0","Type":"ContainerStarted","Data":"15aec5578ad80985836846f30828bcc05101b03f4fa756813859725bbd3db6a3"} Feb 18 12:35:21 crc kubenswrapper[4717]: I0218 12:35:21.629604 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n5xtf" podStartSLOduration=1.9954934469999999 podStartE2EDuration="5.629578341s" podCreationTimestamp="2026-02-18 12:35:16 +0000 UTC" firstStartedPulling="2026-02-18 12:35:17.363412297 +0000 UTC m=+2751.765513613" lastFinishedPulling="2026-02-18 12:35:20.997497191 +0000 UTC m=+2755.399598507" observedRunningTime="2026-02-18 12:35:21.622148299 +0000 UTC m=+2756.024249625" watchObservedRunningTime="2026-02-18 12:35:21.629578341 +0000 UTC m=+2756.031679667" Feb 18 12:35:26 crc kubenswrapper[4717]: I0218 12:35:26.547567 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:26 crc kubenswrapper[4717]: I0218 12:35:26.548196 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:26 crc kubenswrapper[4717]: I0218 12:35:26.594672 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:26 crc kubenswrapper[4717]: I0218 12:35:26.691368 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:29 crc kubenswrapper[4717]: I0218 12:35:29.562867 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-n5xtf"] Feb 18 12:35:29 crc kubenswrapper[4717]: I0218 12:35:29.563454 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n5xtf" podUID="79597d5a-1a26-4e38-a187-81877a152ab0" containerName="registry-server" containerID="cri-o://15aec5578ad80985836846f30828bcc05101b03f4fa756813859725bbd3db6a3" gracePeriod=2 Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.130821 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.332182 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxng9\" (UniqueName: \"kubernetes.io/projected/79597d5a-1a26-4e38-a187-81877a152ab0-kube-api-access-dxng9\") pod \"79597d5a-1a26-4e38-a187-81877a152ab0\" (UID: \"79597d5a-1a26-4e38-a187-81877a152ab0\") " Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.332249 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79597d5a-1a26-4e38-a187-81877a152ab0-catalog-content\") pod \"79597d5a-1a26-4e38-a187-81877a152ab0\" (UID: \"79597d5a-1a26-4e38-a187-81877a152ab0\") " Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.332368 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79597d5a-1a26-4e38-a187-81877a152ab0-utilities\") pod \"79597d5a-1a26-4e38-a187-81877a152ab0\" (UID: \"79597d5a-1a26-4e38-a187-81877a152ab0\") " Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.333285 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79597d5a-1a26-4e38-a187-81877a152ab0-utilities" (OuterVolumeSpecName: "utilities") pod "79597d5a-1a26-4e38-a187-81877a152ab0" (UID: 
"79597d5a-1a26-4e38-a187-81877a152ab0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.333657 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79597d5a-1a26-4e38-a187-81877a152ab0-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.341674 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79597d5a-1a26-4e38-a187-81877a152ab0-kube-api-access-dxng9" (OuterVolumeSpecName: "kube-api-access-dxng9") pod "79597d5a-1a26-4e38-a187-81877a152ab0" (UID: "79597d5a-1a26-4e38-a187-81877a152ab0"). InnerVolumeSpecName "kube-api-access-dxng9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.358088 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79597d5a-1a26-4e38-a187-81877a152ab0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79597d5a-1a26-4e38-a187-81877a152ab0" (UID: "79597d5a-1a26-4e38-a187-81877a152ab0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.435935 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxng9\" (UniqueName: \"kubernetes.io/projected/79597d5a-1a26-4e38-a187-81877a152ab0-kube-api-access-dxng9\") on node \"crc\" DevicePath \"\"" Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.436480 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79597d5a-1a26-4e38-a187-81877a152ab0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.682608 4717 generic.go:334] "Generic (PLEG): container finished" podID="79597d5a-1a26-4e38-a187-81877a152ab0" containerID="15aec5578ad80985836846f30828bcc05101b03f4fa756813859725bbd3db6a3" exitCode=0 Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.682654 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5xtf" event={"ID":"79597d5a-1a26-4e38-a187-81877a152ab0","Type":"ContainerDied","Data":"15aec5578ad80985836846f30828bcc05101b03f4fa756813859725bbd3db6a3"} Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.682681 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5xtf" Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.682716 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5xtf" event={"ID":"79597d5a-1a26-4e38-a187-81877a152ab0","Type":"ContainerDied","Data":"63920862c05da91a42ce5530218dac1cf5475f484a8c2958dc8bcffedc98c105"} Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.682734 4717 scope.go:117] "RemoveContainer" containerID="15aec5578ad80985836846f30828bcc05101b03f4fa756813859725bbd3db6a3" Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.727372 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5xtf"] Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.749227 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5xtf"] Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.751909 4717 scope.go:117] "RemoveContainer" containerID="1225579abae3071fbdbe1c85d3717318de6422300bbf3061a221231a4cfc5c8e" Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.775700 4717 scope.go:117] "RemoveContainer" containerID="43f82c5d3fbced753b5a47a650afc5cb8d2c065fe8d9a2e4b2359be10cd58998" Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.821957 4717 scope.go:117] "RemoveContainer" containerID="15aec5578ad80985836846f30828bcc05101b03f4fa756813859725bbd3db6a3" Feb 18 12:35:30 crc kubenswrapper[4717]: E0218 12:35:30.822648 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15aec5578ad80985836846f30828bcc05101b03f4fa756813859725bbd3db6a3\": container with ID starting with 15aec5578ad80985836846f30828bcc05101b03f4fa756813859725bbd3db6a3 not found: ID does not exist" containerID="15aec5578ad80985836846f30828bcc05101b03f4fa756813859725bbd3db6a3" Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.822716 4717 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15aec5578ad80985836846f30828bcc05101b03f4fa756813859725bbd3db6a3"} err="failed to get container status \"15aec5578ad80985836846f30828bcc05101b03f4fa756813859725bbd3db6a3\": rpc error: code = NotFound desc = could not find container \"15aec5578ad80985836846f30828bcc05101b03f4fa756813859725bbd3db6a3\": container with ID starting with 15aec5578ad80985836846f30828bcc05101b03f4fa756813859725bbd3db6a3 not found: ID does not exist" Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.822758 4717 scope.go:117] "RemoveContainer" containerID="1225579abae3071fbdbe1c85d3717318de6422300bbf3061a221231a4cfc5c8e" Feb 18 12:35:30 crc kubenswrapper[4717]: E0218 12:35:30.823389 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1225579abae3071fbdbe1c85d3717318de6422300bbf3061a221231a4cfc5c8e\": container with ID starting with 1225579abae3071fbdbe1c85d3717318de6422300bbf3061a221231a4cfc5c8e not found: ID does not exist" containerID="1225579abae3071fbdbe1c85d3717318de6422300bbf3061a221231a4cfc5c8e" Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.823471 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1225579abae3071fbdbe1c85d3717318de6422300bbf3061a221231a4cfc5c8e"} err="failed to get container status \"1225579abae3071fbdbe1c85d3717318de6422300bbf3061a221231a4cfc5c8e\": rpc error: code = NotFound desc = could not find container \"1225579abae3071fbdbe1c85d3717318de6422300bbf3061a221231a4cfc5c8e\": container with ID starting with 1225579abae3071fbdbe1c85d3717318de6422300bbf3061a221231a4cfc5c8e not found: ID does not exist" Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.823516 4717 scope.go:117] "RemoveContainer" containerID="43f82c5d3fbced753b5a47a650afc5cb8d2c065fe8d9a2e4b2359be10cd58998" Feb 18 12:35:30 crc kubenswrapper[4717]: E0218 
12:35:30.824185 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43f82c5d3fbced753b5a47a650afc5cb8d2c065fe8d9a2e4b2359be10cd58998\": container with ID starting with 43f82c5d3fbced753b5a47a650afc5cb8d2c065fe8d9a2e4b2359be10cd58998 not found: ID does not exist" containerID="43f82c5d3fbced753b5a47a650afc5cb8d2c065fe8d9a2e4b2359be10cd58998" Feb 18 12:35:30 crc kubenswrapper[4717]: I0218 12:35:30.824236 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43f82c5d3fbced753b5a47a650afc5cb8d2c065fe8d9a2e4b2359be10cd58998"} err="failed to get container status \"43f82c5d3fbced753b5a47a650afc5cb8d2c065fe8d9a2e4b2359be10cd58998\": rpc error: code = NotFound desc = could not find container \"43f82c5d3fbced753b5a47a650afc5cb8d2c065fe8d9a2e4b2359be10cd58998\": container with ID starting with 43f82c5d3fbced753b5a47a650afc5cb8d2c065fe8d9a2e4b2359be10cd58998 not found: ID does not exist" Feb 18 12:35:31 crc kubenswrapper[4717]: I0218 12:35:31.052038 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79597d5a-1a26-4e38-a187-81877a152ab0" path="/var/lib/kubelet/pods/79597d5a-1a26-4e38-a187-81877a152ab0/volumes" Feb 18 12:36:02 crc kubenswrapper[4717]: I0218 12:36:02.605516 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gp2r2"] Feb 18 12:36:02 crc kubenswrapper[4717]: E0218 12:36:02.607524 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79597d5a-1a26-4e38-a187-81877a152ab0" containerName="registry-server" Feb 18 12:36:02 crc kubenswrapper[4717]: I0218 12:36:02.607625 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="79597d5a-1a26-4e38-a187-81877a152ab0" containerName="registry-server" Feb 18 12:36:02 crc kubenswrapper[4717]: E0218 12:36:02.607708 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79597d5a-1a26-4e38-a187-81877a152ab0" 
containerName="extract-utilities" Feb 18 12:36:02 crc kubenswrapper[4717]: I0218 12:36:02.607777 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="79597d5a-1a26-4e38-a187-81877a152ab0" containerName="extract-utilities" Feb 18 12:36:02 crc kubenswrapper[4717]: E0218 12:36:02.607835 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79597d5a-1a26-4e38-a187-81877a152ab0" containerName="extract-content" Feb 18 12:36:02 crc kubenswrapper[4717]: I0218 12:36:02.607895 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="79597d5a-1a26-4e38-a187-81877a152ab0" containerName="extract-content" Feb 18 12:36:02 crc kubenswrapper[4717]: I0218 12:36:02.608165 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="79597d5a-1a26-4e38-a187-81877a152ab0" containerName="registry-server" Feb 18 12:36:02 crc kubenswrapper[4717]: I0218 12:36:02.609830 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gp2r2" Feb 18 12:36:02 crc kubenswrapper[4717]: I0218 12:36:02.640012 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gp2r2"] Feb 18 12:36:02 crc kubenswrapper[4717]: I0218 12:36:02.737955 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6724ff6a-bb09-4f26-a225-c815a47da5fc-catalog-content\") pod \"redhat-operators-gp2r2\" (UID: \"6724ff6a-bb09-4f26-a225-c815a47da5fc\") " pod="openshift-marketplace/redhat-operators-gp2r2" Feb 18 12:36:02 crc kubenswrapper[4717]: I0218 12:36:02.738080 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6724ff6a-bb09-4f26-a225-c815a47da5fc-utilities\") pod \"redhat-operators-gp2r2\" (UID: \"6724ff6a-bb09-4f26-a225-c815a47da5fc\") " pod="openshift-marketplace/redhat-operators-gp2r2" Feb 
18 12:36:02 crc kubenswrapper[4717]: I0218 12:36:02.738464 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76sdw\" (UniqueName: \"kubernetes.io/projected/6724ff6a-bb09-4f26-a225-c815a47da5fc-kube-api-access-76sdw\") pod \"redhat-operators-gp2r2\" (UID: \"6724ff6a-bb09-4f26-a225-c815a47da5fc\") " pod="openshift-marketplace/redhat-operators-gp2r2" Feb 18 12:36:02 crc kubenswrapper[4717]: I0218 12:36:02.840943 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6724ff6a-bb09-4f26-a225-c815a47da5fc-catalog-content\") pod \"redhat-operators-gp2r2\" (UID: \"6724ff6a-bb09-4f26-a225-c815a47da5fc\") " pod="openshift-marketplace/redhat-operators-gp2r2" Feb 18 12:36:02 crc kubenswrapper[4717]: I0218 12:36:02.841459 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6724ff6a-bb09-4f26-a225-c815a47da5fc-utilities\") pod \"redhat-operators-gp2r2\" (UID: \"6724ff6a-bb09-4f26-a225-c815a47da5fc\") " pod="openshift-marketplace/redhat-operators-gp2r2" Feb 18 12:36:02 crc kubenswrapper[4717]: I0218 12:36:02.841628 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6724ff6a-bb09-4f26-a225-c815a47da5fc-catalog-content\") pod \"redhat-operators-gp2r2\" (UID: \"6724ff6a-bb09-4f26-a225-c815a47da5fc\") " pod="openshift-marketplace/redhat-operators-gp2r2" Feb 18 12:36:02 crc kubenswrapper[4717]: I0218 12:36:02.841851 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76sdw\" (UniqueName: \"kubernetes.io/projected/6724ff6a-bb09-4f26-a225-c815a47da5fc-kube-api-access-76sdw\") pod \"redhat-operators-gp2r2\" (UID: \"6724ff6a-bb09-4f26-a225-c815a47da5fc\") " pod="openshift-marketplace/redhat-operators-gp2r2" Feb 18 12:36:02 crc 
kubenswrapper[4717]: I0218 12:36:02.842060 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6724ff6a-bb09-4f26-a225-c815a47da5fc-utilities\") pod \"redhat-operators-gp2r2\" (UID: \"6724ff6a-bb09-4f26-a225-c815a47da5fc\") " pod="openshift-marketplace/redhat-operators-gp2r2" Feb 18 12:36:02 crc kubenswrapper[4717]: I0218 12:36:02.861714 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76sdw\" (UniqueName: \"kubernetes.io/projected/6724ff6a-bb09-4f26-a225-c815a47da5fc-kube-api-access-76sdw\") pod \"redhat-operators-gp2r2\" (UID: \"6724ff6a-bb09-4f26-a225-c815a47da5fc\") " pod="openshift-marketplace/redhat-operators-gp2r2" Feb 18 12:36:02 crc kubenswrapper[4717]: I0218 12:36:02.939022 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gp2r2" Feb 18 12:36:03 crc kubenswrapper[4717]: I0218 12:36:03.517106 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gp2r2"] Feb 18 12:36:04 crc kubenswrapper[4717]: I0218 12:36:04.302352 4717 generic.go:334] "Generic (PLEG): container finished" podID="6724ff6a-bb09-4f26-a225-c815a47da5fc" containerID="c0f9183b7e0a5276b3ce84d5588afa3dc972904562d2f7bbed99bd69556b06f7" exitCode=0 Feb 18 12:36:04 crc kubenswrapper[4717]: I0218 12:36:04.302665 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gp2r2" event={"ID":"6724ff6a-bb09-4f26-a225-c815a47da5fc","Type":"ContainerDied","Data":"c0f9183b7e0a5276b3ce84d5588afa3dc972904562d2f7bbed99bd69556b06f7"} Feb 18 12:36:04 crc kubenswrapper[4717]: I0218 12:36:04.302695 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gp2r2" 
event={"ID":"6724ff6a-bb09-4f26-a225-c815a47da5fc","Type":"ContainerStarted","Data":"c15773d751fb36164cc23be61897781b9ac88d1271c5f21c65b64c55430dfe78"} Feb 18 12:36:14 crc kubenswrapper[4717]: I0218 12:36:14.409133 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gp2r2" event={"ID":"6724ff6a-bb09-4f26-a225-c815a47da5fc","Type":"ContainerStarted","Data":"9163b627a5bb178e6b75433c46779b33a805e2561ff2cf4caba3854ef16ef433"} Feb 18 12:36:15 crc kubenswrapper[4717]: I0218 12:36:15.420309 4717 generic.go:334] "Generic (PLEG): container finished" podID="6724ff6a-bb09-4f26-a225-c815a47da5fc" containerID="9163b627a5bb178e6b75433c46779b33a805e2561ff2cf4caba3854ef16ef433" exitCode=0 Feb 18 12:36:15 crc kubenswrapper[4717]: I0218 12:36:15.420388 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gp2r2" event={"ID":"6724ff6a-bb09-4f26-a225-c815a47da5fc","Type":"ContainerDied","Data":"9163b627a5bb178e6b75433c46779b33a805e2561ff2cf4caba3854ef16ef433"} Feb 18 12:36:16 crc kubenswrapper[4717]: I0218 12:36:16.436564 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gp2r2" event={"ID":"6724ff6a-bb09-4f26-a225-c815a47da5fc","Type":"ContainerStarted","Data":"40dbcad0589e6cd3c6b2b597b8d751d4687dca2d340c02754debd4300eb530bb"} Feb 18 12:36:16 crc kubenswrapper[4717]: I0218 12:36:16.463777 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gp2r2" podStartSLOduration=2.942396954 podStartE2EDuration="14.463757068s" podCreationTimestamp="2026-02-18 12:36:02 +0000 UTC" firstStartedPulling="2026-02-18 12:36:04.30742314 +0000 UTC m=+2798.709524456" lastFinishedPulling="2026-02-18 12:36:15.828783254 +0000 UTC m=+2810.230884570" observedRunningTime="2026-02-18 12:36:16.461980337 +0000 UTC m=+2810.864081653" watchObservedRunningTime="2026-02-18 12:36:16.463757068 +0000 UTC m=+2810.865858384" 
Feb 18 12:36:22 crc kubenswrapper[4717]: I0218 12:36:22.940151 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gp2r2" Feb 18 12:36:22 crc kubenswrapper[4717]: I0218 12:36:22.940825 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gp2r2" Feb 18 12:36:22 crc kubenswrapper[4717]: I0218 12:36:22.995409 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gp2r2" Feb 18 12:36:23 crc kubenswrapper[4717]: I0218 12:36:23.549219 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gp2r2" Feb 18 12:36:23 crc kubenswrapper[4717]: I0218 12:36:23.634027 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gp2r2"] Feb 18 12:36:23 crc kubenswrapper[4717]: I0218 12:36:23.678315 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-24h5b"] Feb 18 12:36:23 crc kubenswrapper[4717]: I0218 12:36:23.678638 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-24h5b" podUID="9c91e056-1a1c-4444-bb0a-7557342ee962" containerName="registry-server" containerID="cri-o://3b6025601e3a13c047dc0ea069a3ac95d181102c04704ce672624fa3eaa53279" gracePeriod=2 Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.406346 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-24h5b" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.513003 4717 generic.go:334] "Generic (PLEG): container finished" podID="9c91e056-1a1c-4444-bb0a-7557342ee962" containerID="3b6025601e3a13c047dc0ea069a3ac95d181102c04704ce672624fa3eaa53279" exitCode=0 Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.513288 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24h5b" event={"ID":"9c91e056-1a1c-4444-bb0a-7557342ee962","Type":"ContainerDied","Data":"3b6025601e3a13c047dc0ea069a3ac95d181102c04704ce672624fa3eaa53279"} Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.513326 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24h5b" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.513349 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24h5b" event={"ID":"9c91e056-1a1c-4444-bb0a-7557342ee962","Type":"ContainerDied","Data":"11be05aa90af9812ac4a8cfbaffe2368f2644ab63a52e5d9d54fd69a6920f288"} Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.513375 4717 scope.go:117] "RemoveContainer" containerID="3b6025601e3a13c047dc0ea069a3ac95d181102c04704ce672624fa3eaa53279" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.534451 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c91e056-1a1c-4444-bb0a-7557342ee962-utilities\") pod \"9c91e056-1a1c-4444-bb0a-7557342ee962\" (UID: \"9c91e056-1a1c-4444-bb0a-7557342ee962\") " Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.534599 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cpvl\" (UniqueName: \"kubernetes.io/projected/9c91e056-1a1c-4444-bb0a-7557342ee962-kube-api-access-5cpvl\") pod 
\"9c91e056-1a1c-4444-bb0a-7557342ee962\" (UID: \"9c91e056-1a1c-4444-bb0a-7557342ee962\") " Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.534645 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c91e056-1a1c-4444-bb0a-7557342ee962-catalog-content\") pod \"9c91e056-1a1c-4444-bb0a-7557342ee962\" (UID: \"9c91e056-1a1c-4444-bb0a-7557342ee962\") " Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.537020 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c91e056-1a1c-4444-bb0a-7557342ee962-utilities" (OuterVolumeSpecName: "utilities") pod "9c91e056-1a1c-4444-bb0a-7557342ee962" (UID: "9c91e056-1a1c-4444-bb0a-7557342ee962"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.559090 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c91e056-1a1c-4444-bb0a-7557342ee962-kube-api-access-5cpvl" (OuterVolumeSpecName: "kube-api-access-5cpvl") pod "9c91e056-1a1c-4444-bb0a-7557342ee962" (UID: "9c91e056-1a1c-4444-bb0a-7557342ee962"). InnerVolumeSpecName "kube-api-access-5cpvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.563924 4717 scope.go:117] "RemoveContainer" containerID="ea96444c95a8525fbd1cb3f666de50047001e8d4e869f324da83305fef7c8bfc" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.617868 4717 scope.go:117] "RemoveContainer" containerID="83c6e2c85b189c2d327fcac00b663b74430f20a1fed48e8b396aaecc3c08be67" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.637321 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c91e056-1a1c-4444-bb0a-7557342ee962-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.637366 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cpvl\" (UniqueName: \"kubernetes.io/projected/9c91e056-1a1c-4444-bb0a-7557342ee962-kube-api-access-5cpvl\") on node \"crc\" DevicePath \"\"" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.680875 4717 scope.go:117] "RemoveContainer" containerID="3b6025601e3a13c047dc0ea069a3ac95d181102c04704ce672624fa3eaa53279" Feb 18 12:36:24 crc kubenswrapper[4717]: E0218 12:36:24.682981 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b6025601e3a13c047dc0ea069a3ac95d181102c04704ce672624fa3eaa53279\": container with ID starting with 3b6025601e3a13c047dc0ea069a3ac95d181102c04704ce672624fa3eaa53279 not found: ID does not exist" containerID="3b6025601e3a13c047dc0ea069a3ac95d181102c04704ce672624fa3eaa53279" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.683030 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b6025601e3a13c047dc0ea069a3ac95d181102c04704ce672624fa3eaa53279"} err="failed to get container status \"3b6025601e3a13c047dc0ea069a3ac95d181102c04704ce672624fa3eaa53279\": rpc error: code = NotFound desc = could not find container 
\"3b6025601e3a13c047dc0ea069a3ac95d181102c04704ce672624fa3eaa53279\": container with ID starting with 3b6025601e3a13c047dc0ea069a3ac95d181102c04704ce672624fa3eaa53279 not found: ID does not exist" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.683059 4717 scope.go:117] "RemoveContainer" containerID="ea96444c95a8525fbd1cb3f666de50047001e8d4e869f324da83305fef7c8bfc" Feb 18 12:36:24 crc kubenswrapper[4717]: E0218 12:36:24.683462 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea96444c95a8525fbd1cb3f666de50047001e8d4e869f324da83305fef7c8bfc\": container with ID starting with ea96444c95a8525fbd1cb3f666de50047001e8d4e869f324da83305fef7c8bfc not found: ID does not exist" containerID="ea96444c95a8525fbd1cb3f666de50047001e8d4e869f324da83305fef7c8bfc" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.683500 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea96444c95a8525fbd1cb3f666de50047001e8d4e869f324da83305fef7c8bfc"} err="failed to get container status \"ea96444c95a8525fbd1cb3f666de50047001e8d4e869f324da83305fef7c8bfc\": rpc error: code = NotFound desc = could not find container \"ea96444c95a8525fbd1cb3f666de50047001e8d4e869f324da83305fef7c8bfc\": container with ID starting with ea96444c95a8525fbd1cb3f666de50047001e8d4e869f324da83305fef7c8bfc not found: ID does not exist" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.683520 4717 scope.go:117] "RemoveContainer" containerID="83c6e2c85b189c2d327fcac00b663b74430f20a1fed48e8b396aaecc3c08be67" Feb 18 12:36:24 crc kubenswrapper[4717]: E0218 12:36:24.683843 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83c6e2c85b189c2d327fcac00b663b74430f20a1fed48e8b396aaecc3c08be67\": container with ID starting with 83c6e2c85b189c2d327fcac00b663b74430f20a1fed48e8b396aaecc3c08be67 not found: ID does not exist" 
containerID="83c6e2c85b189c2d327fcac00b663b74430f20a1fed48e8b396aaecc3c08be67" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.683876 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83c6e2c85b189c2d327fcac00b663b74430f20a1fed48e8b396aaecc3c08be67"} err="failed to get container status \"83c6e2c85b189c2d327fcac00b663b74430f20a1fed48e8b396aaecc3c08be67\": rpc error: code = NotFound desc = could not find container \"83c6e2c85b189c2d327fcac00b663b74430f20a1fed48e8b396aaecc3c08be67\": container with ID starting with 83c6e2c85b189c2d327fcac00b663b74430f20a1fed48e8b396aaecc3c08be67 not found: ID does not exist" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.798106 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c91e056-1a1c-4444-bb0a-7557342ee962-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c91e056-1a1c-4444-bb0a-7557342ee962" (UID: "9c91e056-1a1c-4444-bb0a-7557342ee962"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.842283 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c91e056-1a1c-4444-bb0a-7557342ee962-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.894068 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-24h5b"] Feb 18 12:36:24 crc kubenswrapper[4717]: I0218 12:36:24.902541 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-24h5b"] Feb 18 12:36:25 crc kubenswrapper[4717]: I0218 12:36:25.052071 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c91e056-1a1c-4444-bb0a-7557342ee962" path="/var/lib/kubelet/pods/9c91e056-1a1c-4444-bb0a-7557342ee962/volumes" Feb 18 12:37:12 crc kubenswrapper[4717]: I0218 12:37:12.773030 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:37:12 crc kubenswrapper[4717]: I0218 12:37:12.773679 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:37:42 crc kubenswrapper[4717]: I0218 12:37:42.772858 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 18 12:37:42 crc kubenswrapper[4717]: I0218 12:37:42.773441 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:38:12 crc kubenswrapper[4717]: I0218 12:38:12.772886 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:38:12 crc kubenswrapper[4717]: I0218 12:38:12.773508 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:38:12 crc kubenswrapper[4717]: I0218 12:38:12.773562 4717 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 12:38:12 crc kubenswrapper[4717]: I0218 12:38:12.774534 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9adf96ebc2f2c8d83c4d469fe4c4bd4074d20e003b5cc237ff59f1cf54d44c39"} pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:38:12 crc kubenswrapper[4717]: I0218 12:38:12.774585 4717 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" containerID="cri-o://9adf96ebc2f2c8d83c4d469fe4c4bd4074d20e003b5cc237ff59f1cf54d44c39" gracePeriod=600 Feb 18 12:38:13 crc kubenswrapper[4717]: I0218 12:38:13.699080 4717 generic.go:334] "Generic (PLEG): container finished" podID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerID="9adf96ebc2f2c8d83c4d469fe4c4bd4074d20e003b5cc237ff59f1cf54d44c39" exitCode=0 Feb 18 12:38:13 crc kubenswrapper[4717]: I0218 12:38:13.699146 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerDied","Data":"9adf96ebc2f2c8d83c4d469fe4c4bd4074d20e003b5cc237ff59f1cf54d44c39"} Feb 18 12:38:13 crc kubenswrapper[4717]: I0218 12:38:13.700333 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0"} Feb 18 12:38:13 crc kubenswrapper[4717]: I0218 12:38:13.700465 4717 scope.go:117] "RemoveContainer" containerID="1e80a412ecb2a9b712d5e8dc4325b69bf75c2bc356860fff8f10be5348098931" Feb 18 12:40:42 crc kubenswrapper[4717]: I0218 12:40:42.772720 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:40:42 crc kubenswrapper[4717]: I0218 12:40:42.773403 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:41:12 crc kubenswrapper[4717]: I0218 12:41:12.772730 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:41:12 crc kubenswrapper[4717]: I0218 12:41:12.773383 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:41:42 crc kubenswrapper[4717]: I0218 12:41:42.773287 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:41:42 crc kubenswrapper[4717]: I0218 12:41:42.774002 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:41:42 crc kubenswrapper[4717]: I0218 12:41:42.774062 4717 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 12:41:42 crc kubenswrapper[4717]: I0218 12:41:42.774974 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0"} pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:41:42 crc kubenswrapper[4717]: I0218 12:41:42.775024 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" containerID="cri-o://703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" gracePeriod=600 Feb 18 12:41:42 crc kubenswrapper[4717]: E0218 12:41:42.915366 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:41:43 crc kubenswrapper[4717]: I0218 12:41:43.708066 4717 generic.go:334] "Generic (PLEG): container finished" podID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" exitCode=0 Feb 18 12:41:43 crc kubenswrapper[4717]: I0218 12:41:43.708137 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerDied","Data":"703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0"} Feb 18 12:41:43 crc kubenswrapper[4717]: I0218 12:41:43.708592 4717 scope.go:117] "RemoveContainer" containerID="9adf96ebc2f2c8d83c4d469fe4c4bd4074d20e003b5cc237ff59f1cf54d44c39" Feb 18 12:41:43 crc kubenswrapper[4717]: I0218 12:41:43.709583 4717 
scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:41:43 crc kubenswrapper[4717]: E0218 12:41:43.710870 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:41:55 crc kubenswrapper[4717]: I0218 12:41:55.037737 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:41:55 crc kubenswrapper[4717]: E0218 12:41:55.038600 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:42:10 crc kubenswrapper[4717]: I0218 12:42:10.036914 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:42:10 crc kubenswrapper[4717]: E0218 12:42:10.037844 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:42:25 crc kubenswrapper[4717]: I0218 
12:42:25.036995 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:42:25 crc kubenswrapper[4717]: E0218 12:42:25.037768 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:42:38 crc kubenswrapper[4717]: I0218 12:42:38.037358 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:42:38 crc kubenswrapper[4717]: E0218 12:42:38.038138 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:42:53 crc kubenswrapper[4717]: I0218 12:42:53.038161 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:42:53 crc kubenswrapper[4717]: E0218 12:42:53.038942 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:43:04 crc 
kubenswrapper[4717]: I0218 12:43:04.037900 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:43:04 crc kubenswrapper[4717]: E0218 12:43:04.038897 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:43:15 crc kubenswrapper[4717]: I0218 12:43:15.037725 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:43:15 crc kubenswrapper[4717]: E0218 12:43:15.038577 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:43:28 crc kubenswrapper[4717]: I0218 12:43:28.036328 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:43:28 crc kubenswrapper[4717]: E0218 12:43:28.037091 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 
18 12:43:39 crc kubenswrapper[4717]: I0218 12:43:39.037337 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:43:39 crc kubenswrapper[4717]: E0218 12:43:39.038397 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:43:54 crc kubenswrapper[4717]: I0218 12:43:54.036579 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:43:54 crc kubenswrapper[4717]: E0218 12:43:54.037380 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:44:05 crc kubenswrapper[4717]: I0218 12:44:05.037326 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:44:05 crc kubenswrapper[4717]: E0218 12:44:05.038333 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" 
podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:44:18 crc kubenswrapper[4717]: I0218 12:44:18.036688 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:44:18 crc kubenswrapper[4717]: E0218 12:44:18.038601 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:44:30 crc kubenswrapper[4717]: I0218 12:44:30.037385 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:44:30 crc kubenswrapper[4717]: E0218 12:44:30.038391 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:44:32 crc kubenswrapper[4717]: I0218 12:44:32.325444 4717 generic.go:334] "Generic (PLEG): container finished" podID="00f1b8ee-1760-4308-b796-155234b0a811" containerID="85ef672346e0f1dbb1fb7e5cd2e38d46a3f7740b6abe1b547ae617cbad4e6fba" exitCode=0 Feb 18 12:44:32 crc kubenswrapper[4717]: I0218 12:44:32.325524 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"00f1b8ee-1760-4308-b796-155234b0a811","Type":"ContainerDied","Data":"85ef672346e0f1dbb1fb7e5cd2e38d46a3f7740b6abe1b547ae617cbad4e6fba"} Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 
12:44:33.716134 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.905542 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/00f1b8ee-1760-4308-b796-155234b0a811-test-operator-ephemeral-workdir\") pod \"00f1b8ee-1760-4308-b796-155234b0a811\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.905626 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8qv5\" (UniqueName: \"kubernetes.io/projected/00f1b8ee-1760-4308-b796-155234b0a811-kube-api-access-g8qv5\") pod \"00f1b8ee-1760-4308-b796-155234b0a811\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.905696 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/00f1b8ee-1760-4308-b796-155234b0a811-test-operator-ephemeral-temporary\") pod \"00f1b8ee-1760-4308-b796-155234b0a811\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.905754 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00f1b8ee-1760-4308-b796-155234b0a811-config-data\") pod \"00f1b8ee-1760-4308-b796-155234b0a811\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.905787 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"00f1b8ee-1760-4308-b796-155234b0a811\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " Feb 18 12:44:33 crc 
kubenswrapper[4717]: I0218 12:44:33.905954 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-ssh-key\") pod \"00f1b8ee-1760-4308-b796-155234b0a811\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.906008 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-ca-certs\") pod \"00f1b8ee-1760-4308-b796-155234b0a811\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.906023 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/00f1b8ee-1760-4308-b796-155234b0a811-openstack-config\") pod \"00f1b8ee-1760-4308-b796-155234b0a811\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.906044 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-openstack-config-secret\") pod \"00f1b8ee-1760-4308-b796-155234b0a811\" (UID: \"00f1b8ee-1760-4308-b796-155234b0a811\") " Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.906851 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00f1b8ee-1760-4308-b796-155234b0a811-config-data" (OuterVolumeSpecName: "config-data") pod "00f1b8ee-1760-4308-b796-155234b0a811" (UID: "00f1b8ee-1760-4308-b796-155234b0a811"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.906928 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00f1b8ee-1760-4308-b796-155234b0a811-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "00f1b8ee-1760-4308-b796-155234b0a811" (UID: "00f1b8ee-1760-4308-b796-155234b0a811"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.912021 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "00f1b8ee-1760-4308-b796-155234b0a811" (UID: "00f1b8ee-1760-4308-b796-155234b0a811"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.914771 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00f1b8ee-1760-4308-b796-155234b0a811-kube-api-access-g8qv5" (OuterVolumeSpecName: "kube-api-access-g8qv5") pod "00f1b8ee-1760-4308-b796-155234b0a811" (UID: "00f1b8ee-1760-4308-b796-155234b0a811"). InnerVolumeSpecName "kube-api-access-g8qv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.916049 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00f1b8ee-1760-4308-b796-155234b0a811-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "00f1b8ee-1760-4308-b796-155234b0a811" (UID: "00f1b8ee-1760-4308-b796-155234b0a811"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.935865 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "00f1b8ee-1760-4308-b796-155234b0a811" (UID: "00f1b8ee-1760-4308-b796-155234b0a811"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.936211 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "00f1b8ee-1760-4308-b796-155234b0a811" (UID: "00f1b8ee-1760-4308-b796-155234b0a811"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.939360 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "00f1b8ee-1760-4308-b796-155234b0a811" (UID: "00f1b8ee-1760-4308-b796-155234b0a811"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:44:33 crc kubenswrapper[4717]: I0218 12:44:33.959247 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00f1b8ee-1760-4308-b796-155234b0a811-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "00f1b8ee-1760-4308-b796-155234b0a811" (UID: "00f1b8ee-1760-4308-b796-155234b0a811"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:44:34 crc kubenswrapper[4717]: I0218 12:44:34.011566 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00f1b8ee-1760-4308-b796-155234b0a811-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 12:44:34 crc kubenswrapper[4717]: I0218 12:44:34.011647 4717 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 18 12:44:34 crc kubenswrapper[4717]: I0218 12:44:34.011660 4717 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 18 12:44:34 crc kubenswrapper[4717]: I0218 12:44:34.011688 4717 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 18 12:44:34 crc kubenswrapper[4717]: I0218 12:44:34.011698 4717 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/00f1b8ee-1760-4308-b796-155234b0a811-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 18 12:44:34 crc kubenswrapper[4717]: I0218 12:44:34.011709 4717 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/00f1b8ee-1760-4308-b796-155234b0a811-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 18 12:44:34 crc kubenswrapper[4717]: I0218 12:44:34.011720 4717 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/00f1b8ee-1760-4308-b796-155234b0a811-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 18 12:44:34 crc kubenswrapper[4717]: I0218 12:44:34.011731 4717 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8qv5\" (UniqueName: \"kubernetes.io/projected/00f1b8ee-1760-4308-b796-155234b0a811-kube-api-access-g8qv5\") on node \"crc\" DevicePath \"\"" Feb 18 12:44:34 crc kubenswrapper[4717]: I0218 12:44:34.011741 4717 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/00f1b8ee-1760-4308-b796-155234b0a811-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 18 12:44:34 crc kubenswrapper[4717]: I0218 12:44:34.031448 4717 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 18 12:44:34 crc kubenswrapper[4717]: I0218 12:44:34.114349 4717 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 18 12:44:34 crc kubenswrapper[4717]: I0218 12:44:34.343132 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"00f1b8ee-1760-4308-b796-155234b0a811","Type":"ContainerDied","Data":"81bf80aa58bb47082cd79efc8d46b8440d16157611b2d368ee91279e0e4f41e6"} Feb 18 12:44:34 crc kubenswrapper[4717]: I0218 12:44:34.343177 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81bf80aa58bb47082cd79efc8d46b8440d16157611b2d368ee91279e0e4f41e6" Feb 18 12:44:34 crc kubenswrapper[4717]: I0218 12:44:34.343186 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.238317 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gqg9q"] Feb 18 12:44:38 crc kubenswrapper[4717]: E0218 12:44:38.239706 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f1b8ee-1760-4308-b796-155234b0a811" containerName="tempest-tests-tempest-tests-runner" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.239724 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f1b8ee-1760-4308-b796-155234b0a811" containerName="tempest-tests-tempest-tests-runner" Feb 18 12:44:38 crc kubenswrapper[4717]: E0218 12:44:38.239754 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c91e056-1a1c-4444-bb0a-7557342ee962" containerName="registry-server" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.239763 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c91e056-1a1c-4444-bb0a-7557342ee962" containerName="registry-server" Feb 18 12:44:38 crc kubenswrapper[4717]: E0218 12:44:38.239796 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c91e056-1a1c-4444-bb0a-7557342ee962" containerName="extract-content" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.239804 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c91e056-1a1c-4444-bb0a-7557342ee962" containerName="extract-content" Feb 18 12:44:38 crc kubenswrapper[4717]: E0218 12:44:38.239831 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c91e056-1a1c-4444-bb0a-7557342ee962" containerName="extract-utilities" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.239839 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c91e056-1a1c-4444-bb0a-7557342ee962" containerName="extract-utilities" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.240113 4717 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="00f1b8ee-1760-4308-b796-155234b0a811" containerName="tempest-tests-tempest-tests-runner" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.240149 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c91e056-1a1c-4444-bb0a-7557342ee962" containerName="registry-server" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.241990 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gqg9q" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.255606 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gqg9q"] Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.418042 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c71f5d15-6577-4632-9c69-0cb48f3448e3-utilities\") pod \"community-operators-gqg9q\" (UID: \"c71f5d15-6577-4632-9c69-0cb48f3448e3\") " pod="openshift-marketplace/community-operators-gqg9q" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.418123 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c71f5d15-6577-4632-9c69-0cb48f3448e3-catalog-content\") pod \"community-operators-gqg9q\" (UID: \"c71f5d15-6577-4632-9c69-0cb48f3448e3\") " pod="openshift-marketplace/community-operators-gqg9q" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.418206 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsbrl\" (UniqueName: \"kubernetes.io/projected/c71f5d15-6577-4632-9c69-0cb48f3448e3-kube-api-access-rsbrl\") pod \"community-operators-gqg9q\" (UID: \"c71f5d15-6577-4632-9c69-0cb48f3448e3\") " pod="openshift-marketplace/community-operators-gqg9q" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.520970 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c71f5d15-6577-4632-9c69-0cb48f3448e3-utilities\") pod \"community-operators-gqg9q\" (UID: \"c71f5d15-6577-4632-9c69-0cb48f3448e3\") " pod="openshift-marketplace/community-operators-gqg9q" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.521058 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c71f5d15-6577-4632-9c69-0cb48f3448e3-catalog-content\") pod \"community-operators-gqg9q\" (UID: \"c71f5d15-6577-4632-9c69-0cb48f3448e3\") " pod="openshift-marketplace/community-operators-gqg9q" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.521200 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsbrl\" (UniqueName: \"kubernetes.io/projected/c71f5d15-6577-4632-9c69-0cb48f3448e3-kube-api-access-rsbrl\") pod \"community-operators-gqg9q\" (UID: \"c71f5d15-6577-4632-9c69-0cb48f3448e3\") " pod="openshift-marketplace/community-operators-gqg9q" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.521909 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c71f5d15-6577-4632-9c69-0cb48f3448e3-utilities\") pod \"community-operators-gqg9q\" (UID: \"c71f5d15-6577-4632-9c69-0cb48f3448e3\") " pod="openshift-marketplace/community-operators-gqg9q" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.522019 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c71f5d15-6577-4632-9c69-0cb48f3448e3-catalog-content\") pod \"community-operators-gqg9q\" (UID: \"c71f5d15-6577-4632-9c69-0cb48f3448e3\") " pod="openshift-marketplace/community-operators-gqg9q" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.547297 4717 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rsbrl\" (UniqueName: \"kubernetes.io/projected/c71f5d15-6577-4632-9c69-0cb48f3448e3-kube-api-access-rsbrl\") pod \"community-operators-gqg9q\" (UID: \"c71f5d15-6577-4632-9c69-0cb48f3448e3\") " pod="openshift-marketplace/community-operators-gqg9q" Feb 18 12:44:38 crc kubenswrapper[4717]: I0218 12:44:38.563389 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gqg9q" Feb 18 12:44:39 crc kubenswrapper[4717]: I0218 12:44:39.133527 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gqg9q"] Feb 18 12:44:39 crc kubenswrapper[4717]: I0218 12:44:39.401107 4717 generic.go:334] "Generic (PLEG): container finished" podID="c71f5d15-6577-4632-9c69-0cb48f3448e3" containerID="f39f848f4093d7aed02ff63eec508a8a109db686178def8de214f051e374d808" exitCode=0 Feb 18 12:44:39 crc kubenswrapper[4717]: I0218 12:44:39.401502 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqg9q" event={"ID":"c71f5d15-6577-4632-9c69-0cb48f3448e3","Type":"ContainerDied","Data":"f39f848f4093d7aed02ff63eec508a8a109db686178def8de214f051e374d808"} Feb 18 12:44:39 crc kubenswrapper[4717]: I0218 12:44:39.401565 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqg9q" event={"ID":"c71f5d15-6577-4632-9c69-0cb48f3448e3","Type":"ContainerStarted","Data":"5a85704f8e0699aeb21898f1a48cd0ce7d99b1cf4d266b6f2f18ab735bbebe18"} Feb 18 12:44:39 crc kubenswrapper[4717]: I0218 12:44:39.404390 4717 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:44:41 crc kubenswrapper[4717]: I0218 12:44:41.037214 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:44:41 crc kubenswrapper[4717]: E0218 12:44:41.037926 4717 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:44:42 crc kubenswrapper[4717]: I0218 12:44:42.429840 4717 generic.go:334] "Generic (PLEG): container finished" podID="c71f5d15-6577-4632-9c69-0cb48f3448e3" containerID="5ff93639694c66f520d645ed57b0fb28799e81053fc34a5de39959c6fde30565" exitCode=0 Feb 18 12:44:42 crc kubenswrapper[4717]: I0218 12:44:42.429916 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqg9q" event={"ID":"c71f5d15-6577-4632-9c69-0cb48f3448e3","Type":"ContainerDied","Data":"5ff93639694c66f520d645ed57b0fb28799e81053fc34a5de39959c6fde30565"} Feb 18 12:44:43 crc kubenswrapper[4717]: I0218 12:44:43.443405 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqg9q" event={"ID":"c71f5d15-6577-4632-9c69-0cb48f3448e3","Type":"ContainerStarted","Data":"5ada19d82adb5aadc487af9043177872ac80d79a56f4fd441432f9737b3046b5"} Feb 18 12:44:43 crc kubenswrapper[4717]: I0218 12:44:43.472773 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gqg9q" podStartSLOduration=1.916966316 podStartE2EDuration="5.47275091s" podCreationTimestamp="2026-02-18 12:44:38 +0000 UTC" firstStartedPulling="2026-02-18 12:44:39.403318078 +0000 UTC m=+3313.805419394" lastFinishedPulling="2026-02-18 12:44:42.959102672 +0000 UTC m=+3317.361203988" observedRunningTime="2026-02-18 12:44:43.464836712 +0000 UTC m=+3317.866938028" watchObservedRunningTime="2026-02-18 12:44:43.47275091 +0000 UTC m=+3317.874852226" Feb 18 12:44:45 crc kubenswrapper[4717]: 
I0218 12:44:45.014495 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 12:44:45 crc kubenswrapper[4717]: I0218 12:44:45.017039 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:44:45 crc kubenswrapper[4717]: I0218 12:44:45.020505 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-8snck" Feb 18 12:44:45 crc kubenswrapper[4717]: I0218 12:44:45.024960 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 12:44:45 crc kubenswrapper[4717]: I0218 12:44:45.152878 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zcxn\" (UniqueName: \"kubernetes.io/projected/67131484-bd44-40b0-92da-d06886a8179b-kube-api-access-5zcxn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"67131484-bd44-40b0-92da-d06886a8179b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:44:45 crc kubenswrapper[4717]: I0218 12:44:45.153096 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"67131484-bd44-40b0-92da-d06886a8179b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:44:45 crc kubenswrapper[4717]: I0218 12:44:45.255751 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zcxn\" (UniqueName: \"kubernetes.io/projected/67131484-bd44-40b0-92da-d06886a8179b-kube-api-access-5zcxn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"67131484-bd44-40b0-92da-d06886a8179b\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:44:45 crc kubenswrapper[4717]: I0218 12:44:45.255849 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"67131484-bd44-40b0-92da-d06886a8179b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:44:45 crc kubenswrapper[4717]: I0218 12:44:45.256425 4717 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"67131484-bd44-40b0-92da-d06886a8179b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:44:45 crc kubenswrapper[4717]: I0218 12:44:45.280686 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zcxn\" (UniqueName: \"kubernetes.io/projected/67131484-bd44-40b0-92da-d06886a8179b-kube-api-access-5zcxn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"67131484-bd44-40b0-92da-d06886a8179b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:44:45 crc kubenswrapper[4717]: I0218 12:44:45.285537 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"67131484-bd44-40b0-92da-d06886a8179b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:44:45 crc kubenswrapper[4717]: I0218 12:44:45.354221 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 12:44:45 crc kubenswrapper[4717]: I0218 12:44:45.959790 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 12:44:46 crc kubenswrapper[4717]: I0218 12:44:46.510150 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"67131484-bd44-40b0-92da-d06886a8179b","Type":"ContainerStarted","Data":"7c2f8f6903fdbdb2d53db5dc1022d3c3e3d42aefb34f57117dca0a2e2dbfcc3e"} Feb 18 12:44:47 crc kubenswrapper[4717]: I0218 12:44:47.522243 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"67131484-bd44-40b0-92da-d06886a8179b","Type":"ContainerStarted","Data":"8ec8fafa3560ebfffa9ef8e5b4a34ee9c1cf64819f120bd993bdea6c7a161b39"} Feb 18 12:44:47 crc kubenswrapper[4717]: I0218 12:44:47.537051 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.727364998 podStartE2EDuration="3.537025143s" podCreationTimestamp="2026-02-18 12:44:44 +0000 UTC" firstStartedPulling="2026-02-18 12:44:45.965149169 +0000 UTC m=+3320.367250485" lastFinishedPulling="2026-02-18 12:44:46.774809314 +0000 UTC m=+3321.176910630" observedRunningTime="2026-02-18 12:44:47.536007384 +0000 UTC m=+3321.938108710" watchObservedRunningTime="2026-02-18 12:44:47.537025143 +0000 UTC m=+3321.939126449" Feb 18 12:44:48 crc kubenswrapper[4717]: I0218 12:44:48.564868 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gqg9q" Feb 18 12:44:48 crc kubenswrapper[4717]: I0218 12:44:48.564927 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gqg9q" Feb 18 
12:44:48 crc kubenswrapper[4717]: I0218 12:44:48.629900 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gqg9q" Feb 18 12:44:49 crc kubenswrapper[4717]: I0218 12:44:49.587669 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gqg9q" Feb 18 12:44:49 crc kubenswrapper[4717]: I0218 12:44:49.647377 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gqg9q"] Feb 18 12:44:51 crc kubenswrapper[4717]: I0218 12:44:51.557212 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gqg9q" podUID="c71f5d15-6577-4632-9c69-0cb48f3448e3" containerName="registry-server" containerID="cri-o://5ada19d82adb5aadc487af9043177872ac80d79a56f4fd441432f9737b3046b5" gracePeriod=2 Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.037188 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gqg9q" Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.070040 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c71f5d15-6577-4632-9c69-0cb48f3448e3-utilities\") pod \"c71f5d15-6577-4632-9c69-0cb48f3448e3\" (UID: \"c71f5d15-6577-4632-9c69-0cb48f3448e3\") " Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.073481 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c71f5d15-6577-4632-9c69-0cb48f3448e3-utilities" (OuterVolumeSpecName: "utilities") pod "c71f5d15-6577-4632-9c69-0cb48f3448e3" (UID: "c71f5d15-6577-4632-9c69-0cb48f3448e3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.171404 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c71f5d15-6577-4632-9c69-0cb48f3448e3-catalog-content\") pod \"c71f5d15-6577-4632-9c69-0cb48f3448e3\" (UID: \"c71f5d15-6577-4632-9c69-0cb48f3448e3\") " Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.171628 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsbrl\" (UniqueName: \"kubernetes.io/projected/c71f5d15-6577-4632-9c69-0cb48f3448e3-kube-api-access-rsbrl\") pod \"c71f5d15-6577-4632-9c69-0cb48f3448e3\" (UID: \"c71f5d15-6577-4632-9c69-0cb48f3448e3\") " Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.172136 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c71f5d15-6577-4632-9c69-0cb48f3448e3-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.178509 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c71f5d15-6577-4632-9c69-0cb48f3448e3-kube-api-access-rsbrl" (OuterVolumeSpecName: "kube-api-access-rsbrl") pod "c71f5d15-6577-4632-9c69-0cb48f3448e3" (UID: "c71f5d15-6577-4632-9c69-0cb48f3448e3"). InnerVolumeSpecName "kube-api-access-rsbrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.237807 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c71f5d15-6577-4632-9c69-0cb48f3448e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c71f5d15-6577-4632-9c69-0cb48f3448e3" (UID: "c71f5d15-6577-4632-9c69-0cb48f3448e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.274358 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c71f5d15-6577-4632-9c69-0cb48f3448e3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.274403 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsbrl\" (UniqueName: \"kubernetes.io/projected/c71f5d15-6577-4632-9c69-0cb48f3448e3-kube-api-access-rsbrl\") on node \"crc\" DevicePath \"\"" Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.569288 4717 generic.go:334] "Generic (PLEG): container finished" podID="c71f5d15-6577-4632-9c69-0cb48f3448e3" containerID="5ada19d82adb5aadc487af9043177872ac80d79a56f4fd441432f9737b3046b5" exitCode=0 Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.569341 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqg9q" event={"ID":"c71f5d15-6577-4632-9c69-0cb48f3448e3","Type":"ContainerDied","Data":"5ada19d82adb5aadc487af9043177872ac80d79a56f4fd441432f9737b3046b5"} Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.569385 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gqg9q" event={"ID":"c71f5d15-6577-4632-9c69-0cb48f3448e3","Type":"ContainerDied","Data":"5a85704f8e0699aeb21898f1a48cd0ce7d99b1cf4d266b6f2f18ab735bbebe18"} Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.569393 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gqg9q" Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.569411 4717 scope.go:117] "RemoveContainer" containerID="5ada19d82adb5aadc487af9043177872ac80d79a56f4fd441432f9737b3046b5" Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.592718 4717 scope.go:117] "RemoveContainer" containerID="5ff93639694c66f520d645ed57b0fb28799e81053fc34a5de39959c6fde30565" Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.615413 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gqg9q"] Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.624877 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gqg9q"] Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.636100 4717 scope.go:117] "RemoveContainer" containerID="f39f848f4093d7aed02ff63eec508a8a109db686178def8de214f051e374d808" Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.669015 4717 scope.go:117] "RemoveContainer" containerID="5ada19d82adb5aadc487af9043177872ac80d79a56f4fd441432f9737b3046b5" Feb 18 12:44:52 crc kubenswrapper[4717]: E0218 12:44:52.671738 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ada19d82adb5aadc487af9043177872ac80d79a56f4fd441432f9737b3046b5\": container with ID starting with 5ada19d82adb5aadc487af9043177872ac80d79a56f4fd441432f9737b3046b5 not found: ID does not exist" containerID="5ada19d82adb5aadc487af9043177872ac80d79a56f4fd441432f9737b3046b5" Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.671786 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ada19d82adb5aadc487af9043177872ac80d79a56f4fd441432f9737b3046b5"} err="failed to get container status \"5ada19d82adb5aadc487af9043177872ac80d79a56f4fd441432f9737b3046b5\": rpc error: code = NotFound desc = could not find 
container \"5ada19d82adb5aadc487af9043177872ac80d79a56f4fd441432f9737b3046b5\": container with ID starting with 5ada19d82adb5aadc487af9043177872ac80d79a56f4fd441432f9737b3046b5 not found: ID does not exist" Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.671819 4717 scope.go:117] "RemoveContainer" containerID="5ff93639694c66f520d645ed57b0fb28799e81053fc34a5de39959c6fde30565" Feb 18 12:44:52 crc kubenswrapper[4717]: E0218 12:44:52.672790 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ff93639694c66f520d645ed57b0fb28799e81053fc34a5de39959c6fde30565\": container with ID starting with 5ff93639694c66f520d645ed57b0fb28799e81053fc34a5de39959c6fde30565 not found: ID does not exist" containerID="5ff93639694c66f520d645ed57b0fb28799e81053fc34a5de39959c6fde30565" Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.672822 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ff93639694c66f520d645ed57b0fb28799e81053fc34a5de39959c6fde30565"} err="failed to get container status \"5ff93639694c66f520d645ed57b0fb28799e81053fc34a5de39959c6fde30565\": rpc error: code = NotFound desc = could not find container \"5ff93639694c66f520d645ed57b0fb28799e81053fc34a5de39959c6fde30565\": container with ID starting with 5ff93639694c66f520d645ed57b0fb28799e81053fc34a5de39959c6fde30565 not found: ID does not exist" Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.672842 4717 scope.go:117] "RemoveContainer" containerID="f39f848f4093d7aed02ff63eec508a8a109db686178def8de214f051e374d808" Feb 18 12:44:52 crc kubenswrapper[4717]: E0218 12:44:52.673418 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f39f848f4093d7aed02ff63eec508a8a109db686178def8de214f051e374d808\": container with ID starting with f39f848f4093d7aed02ff63eec508a8a109db686178def8de214f051e374d808 not found: ID does 
not exist" containerID="f39f848f4093d7aed02ff63eec508a8a109db686178def8de214f051e374d808" Feb 18 12:44:52 crc kubenswrapper[4717]: I0218 12:44:52.673478 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f39f848f4093d7aed02ff63eec508a8a109db686178def8de214f051e374d808"} err="failed to get container status \"f39f848f4093d7aed02ff63eec508a8a109db686178def8de214f051e374d808\": rpc error: code = NotFound desc = could not find container \"f39f848f4093d7aed02ff63eec508a8a109db686178def8de214f051e374d808\": container with ID starting with f39f848f4093d7aed02ff63eec508a8a109db686178def8de214f051e374d808 not found: ID does not exist" Feb 18 12:44:53 crc kubenswrapper[4717]: I0218 12:44:53.037307 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:44:53 crc kubenswrapper[4717]: E0218 12:44:53.037907 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:44:53 crc kubenswrapper[4717]: I0218 12:44:53.050537 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c71f5d15-6577-4632-9c69-0cb48f3448e3" path="/var/lib/kubelet/pods/c71f5d15-6577-4632-9c69-0cb48f3448e3/volumes" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.155276 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg"] Feb 18 12:45:00 crc kubenswrapper[4717]: E0218 12:45:00.156602 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71f5d15-6577-4632-9c69-0cb48f3448e3" 
containerName="registry-server" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.156625 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71f5d15-6577-4632-9c69-0cb48f3448e3" containerName="registry-server" Feb 18 12:45:00 crc kubenswrapper[4717]: E0218 12:45:00.156635 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71f5d15-6577-4632-9c69-0cb48f3448e3" containerName="extract-utilities" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.156648 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71f5d15-6577-4632-9c69-0cb48f3448e3" containerName="extract-utilities" Feb 18 12:45:00 crc kubenswrapper[4717]: E0218 12:45:00.156675 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71f5d15-6577-4632-9c69-0cb48f3448e3" containerName="extract-content" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.156682 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71f5d15-6577-4632-9c69-0cb48f3448e3" containerName="extract-content" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.156924 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="c71f5d15-6577-4632-9c69-0cb48f3448e3" containerName="registry-server" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.157714 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.161893 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.162249 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.173065 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg"] Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.254692 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpdkg\" (UniqueName: \"kubernetes.io/projected/357b5bce-bcec-40c7-bd96-9ffe134e5915-kube-api-access-mpdkg\") pod \"collect-profiles-29523645-cdrwg\" (UID: \"357b5bce-bcec-40c7-bd96-9ffe134e5915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.255202 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/357b5bce-bcec-40c7-bd96-9ffe134e5915-config-volume\") pod \"collect-profiles-29523645-cdrwg\" (UID: \"357b5bce-bcec-40c7-bd96-9ffe134e5915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.255710 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/357b5bce-bcec-40c7-bd96-9ffe134e5915-secret-volume\") pod \"collect-profiles-29523645-cdrwg\" (UID: \"357b5bce-bcec-40c7-bd96-9ffe134e5915\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.357425 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpdkg\" (UniqueName: \"kubernetes.io/projected/357b5bce-bcec-40c7-bd96-9ffe134e5915-kube-api-access-mpdkg\") pod \"collect-profiles-29523645-cdrwg\" (UID: \"357b5bce-bcec-40c7-bd96-9ffe134e5915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.357583 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/357b5bce-bcec-40c7-bd96-9ffe134e5915-config-volume\") pod \"collect-profiles-29523645-cdrwg\" (UID: \"357b5bce-bcec-40c7-bd96-9ffe134e5915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.357639 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/357b5bce-bcec-40c7-bd96-9ffe134e5915-secret-volume\") pod \"collect-profiles-29523645-cdrwg\" (UID: \"357b5bce-bcec-40c7-bd96-9ffe134e5915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.358791 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/357b5bce-bcec-40c7-bd96-9ffe134e5915-config-volume\") pod \"collect-profiles-29523645-cdrwg\" (UID: \"357b5bce-bcec-40c7-bd96-9ffe134e5915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.376935 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpdkg\" (UniqueName: 
\"kubernetes.io/projected/357b5bce-bcec-40c7-bd96-9ffe134e5915-kube-api-access-mpdkg\") pod \"collect-profiles-29523645-cdrwg\" (UID: \"357b5bce-bcec-40c7-bd96-9ffe134e5915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.377841 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/357b5bce-bcec-40c7-bd96-9ffe134e5915-secret-volume\") pod \"collect-profiles-29523645-cdrwg\" (UID: \"357b5bce-bcec-40c7-bd96-9ffe134e5915\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.488895 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" Feb 18 12:45:00 crc kubenswrapper[4717]: I0218 12:45:00.941043 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg"] Feb 18 12:45:01 crc kubenswrapper[4717]: I0218 12:45:01.655724 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" event={"ID":"357b5bce-bcec-40c7-bd96-9ffe134e5915","Type":"ContainerStarted","Data":"9b464cef36123f72bfc6d309650f7d3095667ab3ec99ff857ca19162c6acfc3f"} Feb 18 12:45:01 crc kubenswrapper[4717]: I0218 12:45:01.656027 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" event={"ID":"357b5bce-bcec-40c7-bd96-9ffe134e5915","Type":"ContainerStarted","Data":"c1f9fa158051486b4847dedf3fdafcaee69ee6ad5c7603b6711c45a041700b22"} Feb 18 12:45:02 crc kubenswrapper[4717]: I0218 12:45:02.666521 4717 generic.go:334] "Generic (PLEG): container finished" podID="357b5bce-bcec-40c7-bd96-9ffe134e5915" 
containerID="9b464cef36123f72bfc6d309650f7d3095667ab3ec99ff857ca19162c6acfc3f" exitCode=0 Feb 18 12:45:02 crc kubenswrapper[4717]: I0218 12:45:02.667052 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" event={"ID":"357b5bce-bcec-40c7-bd96-9ffe134e5915","Type":"ContainerDied","Data":"9b464cef36123f72bfc6d309650f7d3095667ab3ec99ff857ca19162c6acfc3f"} Feb 18 12:45:03 crc kubenswrapper[4717]: I0218 12:45:03.037837 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" Feb 18 12:45:03 crc kubenswrapper[4717]: I0218 12:45:03.217206 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/357b5bce-bcec-40c7-bd96-9ffe134e5915-config-volume\") pod \"357b5bce-bcec-40c7-bd96-9ffe134e5915\" (UID: \"357b5bce-bcec-40c7-bd96-9ffe134e5915\") " Feb 18 12:45:03 crc kubenswrapper[4717]: I0218 12:45:03.217386 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpdkg\" (UniqueName: \"kubernetes.io/projected/357b5bce-bcec-40c7-bd96-9ffe134e5915-kube-api-access-mpdkg\") pod \"357b5bce-bcec-40c7-bd96-9ffe134e5915\" (UID: \"357b5bce-bcec-40c7-bd96-9ffe134e5915\") " Feb 18 12:45:03 crc kubenswrapper[4717]: I0218 12:45:03.217585 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/357b5bce-bcec-40c7-bd96-9ffe134e5915-secret-volume\") pod \"357b5bce-bcec-40c7-bd96-9ffe134e5915\" (UID: \"357b5bce-bcec-40c7-bd96-9ffe134e5915\") " Feb 18 12:45:03 crc kubenswrapper[4717]: I0218 12:45:03.218424 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/357b5bce-bcec-40c7-bd96-9ffe134e5915-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"357b5bce-bcec-40c7-bd96-9ffe134e5915" (UID: "357b5bce-bcec-40c7-bd96-9ffe134e5915"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 12:45:03 crc kubenswrapper[4717]: I0218 12:45:03.224624 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/357b5bce-bcec-40c7-bd96-9ffe134e5915-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "357b5bce-bcec-40c7-bd96-9ffe134e5915" (UID: "357b5bce-bcec-40c7-bd96-9ffe134e5915"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 12:45:03 crc kubenswrapper[4717]: I0218 12:45:03.224633 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/357b5bce-bcec-40c7-bd96-9ffe134e5915-kube-api-access-mpdkg" (OuterVolumeSpecName: "kube-api-access-mpdkg") pod "357b5bce-bcec-40c7-bd96-9ffe134e5915" (UID: "357b5bce-bcec-40c7-bd96-9ffe134e5915"). InnerVolumeSpecName "kube-api-access-mpdkg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:45:03 crc kubenswrapper[4717]: I0218 12:45:03.321016 4717 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/357b5bce-bcec-40c7-bd96-9ffe134e5915-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 12:45:03 crc kubenswrapper[4717]: I0218 12:45:03.321076 4717 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/357b5bce-bcec-40c7-bd96-9ffe134e5915-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 12:45:03 crc kubenswrapper[4717]: I0218 12:45:03.321099 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpdkg\" (UniqueName: \"kubernetes.io/projected/357b5bce-bcec-40c7-bd96-9ffe134e5915-kube-api-access-mpdkg\") on node \"crc\" DevicePath \"\"" Feb 18 12:45:03 crc kubenswrapper[4717]: I0218 12:45:03.678380 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" event={"ID":"357b5bce-bcec-40c7-bd96-9ffe134e5915","Type":"ContainerDied","Data":"c1f9fa158051486b4847dedf3fdafcaee69ee6ad5c7603b6711c45a041700b22"} Feb 18 12:45:03 crc kubenswrapper[4717]: I0218 12:45:03.678732 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1f9fa158051486b4847dedf3fdafcaee69ee6ad5c7603b6711c45a041700b22" Feb 18 12:45:03 crc kubenswrapper[4717]: I0218 12:45:03.678395 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523645-cdrwg" Feb 18 12:45:04 crc kubenswrapper[4717]: I0218 12:45:04.037622 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:45:04 crc kubenswrapper[4717]: E0218 12:45:04.038003 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:45:04 crc kubenswrapper[4717]: I0218 12:45:04.128533 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk"] Feb 18 12:45:04 crc kubenswrapper[4717]: I0218 12:45:04.137724 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523600-fqhzk"] Feb 18 12:45:05 crc kubenswrapper[4717]: I0218 12:45:05.057507 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f3dc2ae-655b-4dfd-a294-22d48dce0867" path="/var/lib/kubelet/pods/8f3dc2ae-655b-4dfd-a294-22d48dce0867/volumes" Feb 18 12:45:08 crc kubenswrapper[4717]: I0218 12:45:08.547358 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4xhlg/must-gather-rclqm"] Feb 18 12:45:08 crc kubenswrapper[4717]: E0218 12:45:08.548250 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="357b5bce-bcec-40c7-bd96-9ffe134e5915" containerName="collect-profiles" Feb 18 12:45:08 crc kubenswrapper[4717]: I0218 12:45:08.548288 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="357b5bce-bcec-40c7-bd96-9ffe134e5915" containerName="collect-profiles" Feb 18 12:45:08 crc 
kubenswrapper[4717]: I0218 12:45:08.548636 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="357b5bce-bcec-40c7-bd96-9ffe134e5915" containerName="collect-profiles" Feb 18 12:45:08 crc kubenswrapper[4717]: I0218 12:45:08.550087 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4xhlg/must-gather-rclqm" Feb 18 12:45:08 crc kubenswrapper[4717]: I0218 12:45:08.553371 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4xhlg"/"openshift-service-ca.crt" Feb 18 12:45:08 crc kubenswrapper[4717]: I0218 12:45:08.553630 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4xhlg"/"kube-root-ca.crt" Feb 18 12:45:08 crc kubenswrapper[4717]: I0218 12:45:08.559974 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4xhlg"/"default-dockercfg-cs626" Feb 18 12:45:08 crc kubenswrapper[4717]: I0218 12:45:08.560571 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4xhlg/must-gather-rclqm"] Feb 18 12:45:08 crc kubenswrapper[4717]: I0218 12:45:08.646352 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcgtf\" (UniqueName: \"kubernetes.io/projected/8babfa5e-44d2-4766-976c-54881af09657-kube-api-access-bcgtf\") pod \"must-gather-rclqm\" (UID: \"8babfa5e-44d2-4766-976c-54881af09657\") " pod="openshift-must-gather-4xhlg/must-gather-rclqm" Feb 18 12:45:08 crc kubenswrapper[4717]: I0218 12:45:08.646422 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8babfa5e-44d2-4766-976c-54881af09657-must-gather-output\") pod \"must-gather-rclqm\" (UID: \"8babfa5e-44d2-4766-976c-54881af09657\") " pod="openshift-must-gather-4xhlg/must-gather-rclqm" Feb 18 12:45:08 crc kubenswrapper[4717]: I0218 12:45:08.748608 
4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcgtf\" (UniqueName: \"kubernetes.io/projected/8babfa5e-44d2-4766-976c-54881af09657-kube-api-access-bcgtf\") pod \"must-gather-rclqm\" (UID: \"8babfa5e-44d2-4766-976c-54881af09657\") " pod="openshift-must-gather-4xhlg/must-gather-rclqm" Feb 18 12:45:08 crc kubenswrapper[4717]: I0218 12:45:08.748666 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8babfa5e-44d2-4766-976c-54881af09657-must-gather-output\") pod \"must-gather-rclqm\" (UID: \"8babfa5e-44d2-4766-976c-54881af09657\") " pod="openshift-must-gather-4xhlg/must-gather-rclqm" Feb 18 12:45:08 crc kubenswrapper[4717]: I0218 12:45:08.749069 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8babfa5e-44d2-4766-976c-54881af09657-must-gather-output\") pod \"must-gather-rclqm\" (UID: \"8babfa5e-44d2-4766-976c-54881af09657\") " pod="openshift-must-gather-4xhlg/must-gather-rclqm" Feb 18 12:45:08 crc kubenswrapper[4717]: I0218 12:45:08.773232 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcgtf\" (UniqueName: \"kubernetes.io/projected/8babfa5e-44d2-4766-976c-54881af09657-kube-api-access-bcgtf\") pod \"must-gather-rclqm\" (UID: \"8babfa5e-44d2-4766-976c-54881af09657\") " pod="openshift-must-gather-4xhlg/must-gather-rclqm" Feb 18 12:45:08 crc kubenswrapper[4717]: I0218 12:45:08.873836 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4xhlg/must-gather-rclqm" Feb 18 12:45:09 crc kubenswrapper[4717]: I0218 12:45:09.329982 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4xhlg/must-gather-rclqm"] Feb 18 12:45:09 crc kubenswrapper[4717]: I0218 12:45:09.731512 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4xhlg/must-gather-rclqm" event={"ID":"8babfa5e-44d2-4766-976c-54881af09657","Type":"ContainerStarted","Data":"67304b8c779ba42d18b0ca77629c5a9e2306fc584a83b8aa700dd7dd53c983bc"} Feb 18 12:45:16 crc kubenswrapper[4717]: I0218 12:45:16.037161 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:45:16 crc kubenswrapper[4717]: E0218 12:45:16.038011 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:45:20 crc kubenswrapper[4717]: I0218 12:45:20.170451 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4xhlg/must-gather-rclqm" event={"ID":"8babfa5e-44d2-4766-976c-54881af09657","Type":"ContainerStarted","Data":"8065a4a587fa12dc19ea37233256de40481bbb546700ba3182eb5cf4c0c3dfba"} Feb 18 12:45:20 crc kubenswrapper[4717]: I0218 12:45:20.171228 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4xhlg/must-gather-rclqm" event={"ID":"8babfa5e-44d2-4766-976c-54881af09657","Type":"ContainerStarted","Data":"3d9e32bebbc3aa58a566006906f58f9207b1327705302c0e7f72dca81bc49850"} Feb 18 12:45:20 crc kubenswrapper[4717]: I0218 12:45:20.192046 4717 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-4xhlg/must-gather-rclqm" podStartSLOduration=3.101722776 podStartE2EDuration="12.192027924s" podCreationTimestamp="2026-02-18 12:45:08 +0000 UTC" firstStartedPulling="2026-02-18 12:45:09.346190288 +0000 UTC m=+3343.748291604" lastFinishedPulling="2026-02-18 12:45:18.436495446 +0000 UTC m=+3352.838596752" observedRunningTime="2026-02-18 12:45:20.189712478 +0000 UTC m=+3354.591813814" watchObservedRunningTime="2026-02-18 12:45:20.192027924 +0000 UTC m=+3354.594129240" Feb 18 12:45:24 crc kubenswrapper[4717]: I0218 12:45:24.147873 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4xhlg/crc-debug-25msk"] Feb 18 12:45:24 crc kubenswrapper[4717]: I0218 12:45:24.149912 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4xhlg/crc-debug-25msk" Feb 18 12:45:24 crc kubenswrapper[4717]: I0218 12:45:24.247922 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d30da568-9115-4cb4-b66b-52194bf653ae-host\") pod \"crc-debug-25msk\" (UID: \"d30da568-9115-4cb4-b66b-52194bf653ae\") " pod="openshift-must-gather-4xhlg/crc-debug-25msk" Feb 18 12:45:24 crc kubenswrapper[4717]: I0218 12:45:24.248414 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcnnh\" (UniqueName: \"kubernetes.io/projected/d30da568-9115-4cb4-b66b-52194bf653ae-kube-api-access-qcnnh\") pod \"crc-debug-25msk\" (UID: \"d30da568-9115-4cb4-b66b-52194bf653ae\") " pod="openshift-must-gather-4xhlg/crc-debug-25msk" Feb 18 12:45:24 crc kubenswrapper[4717]: I0218 12:45:24.351277 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d30da568-9115-4cb4-b66b-52194bf653ae-host\") pod \"crc-debug-25msk\" (UID: \"d30da568-9115-4cb4-b66b-52194bf653ae\") " 
pod="openshift-must-gather-4xhlg/crc-debug-25msk" Feb 18 12:45:24 crc kubenswrapper[4717]: I0218 12:45:24.351487 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcnnh\" (UniqueName: \"kubernetes.io/projected/d30da568-9115-4cb4-b66b-52194bf653ae-kube-api-access-qcnnh\") pod \"crc-debug-25msk\" (UID: \"d30da568-9115-4cb4-b66b-52194bf653ae\") " pod="openshift-must-gather-4xhlg/crc-debug-25msk" Feb 18 12:45:24 crc kubenswrapper[4717]: I0218 12:45:24.351494 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d30da568-9115-4cb4-b66b-52194bf653ae-host\") pod \"crc-debug-25msk\" (UID: \"d30da568-9115-4cb4-b66b-52194bf653ae\") " pod="openshift-must-gather-4xhlg/crc-debug-25msk" Feb 18 12:45:24 crc kubenswrapper[4717]: I0218 12:45:24.402028 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcnnh\" (UniqueName: \"kubernetes.io/projected/d30da568-9115-4cb4-b66b-52194bf653ae-kube-api-access-qcnnh\") pod \"crc-debug-25msk\" (UID: \"d30da568-9115-4cb4-b66b-52194bf653ae\") " pod="openshift-must-gather-4xhlg/crc-debug-25msk" Feb 18 12:45:24 crc kubenswrapper[4717]: I0218 12:45:24.471306 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4xhlg/crc-debug-25msk" Feb 18 12:45:25 crc kubenswrapper[4717]: I0218 12:45:25.214346 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4xhlg/crc-debug-25msk" event={"ID":"d30da568-9115-4cb4-b66b-52194bf653ae","Type":"ContainerStarted","Data":"2b907112e8f3706c01fc6233d57debf23839e38085110d7cbea967ca82cbcefb"} Feb 18 12:45:30 crc kubenswrapper[4717]: I0218 12:45:30.036774 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:45:30 crc kubenswrapper[4717]: E0218 12:45:30.037669 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:45:32 crc kubenswrapper[4717]: I0218 12:45:32.471009 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9qfgx"] Feb 18 12:45:32 crc kubenswrapper[4717]: I0218 12:45:32.473586 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qfgx" Feb 18 12:45:32 crc kubenswrapper[4717]: I0218 12:45:32.504022 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qfgx"] Feb 18 12:45:32 crc kubenswrapper[4717]: I0218 12:45:32.628630 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8986e8c3-7ce9-40ca-94dd-8258ee800dc3-catalog-content\") pod \"redhat-marketplace-9qfgx\" (UID: \"8986e8c3-7ce9-40ca-94dd-8258ee800dc3\") " pod="openshift-marketplace/redhat-marketplace-9qfgx" Feb 18 12:45:32 crc kubenswrapper[4717]: I0218 12:45:32.628805 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f67kz\" (UniqueName: \"kubernetes.io/projected/8986e8c3-7ce9-40ca-94dd-8258ee800dc3-kube-api-access-f67kz\") pod \"redhat-marketplace-9qfgx\" (UID: \"8986e8c3-7ce9-40ca-94dd-8258ee800dc3\") " pod="openshift-marketplace/redhat-marketplace-9qfgx" Feb 18 12:45:32 crc kubenswrapper[4717]: I0218 12:45:32.628885 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8986e8c3-7ce9-40ca-94dd-8258ee800dc3-utilities\") pod \"redhat-marketplace-9qfgx\" (UID: \"8986e8c3-7ce9-40ca-94dd-8258ee800dc3\") " pod="openshift-marketplace/redhat-marketplace-9qfgx" Feb 18 12:45:32 crc kubenswrapper[4717]: I0218 12:45:32.731692 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8986e8c3-7ce9-40ca-94dd-8258ee800dc3-catalog-content\") pod \"redhat-marketplace-9qfgx\" (UID: \"8986e8c3-7ce9-40ca-94dd-8258ee800dc3\") " pod="openshift-marketplace/redhat-marketplace-9qfgx" Feb 18 12:45:32 crc kubenswrapper[4717]: I0218 12:45:32.731821 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f67kz\" (UniqueName: \"kubernetes.io/projected/8986e8c3-7ce9-40ca-94dd-8258ee800dc3-kube-api-access-f67kz\") pod \"redhat-marketplace-9qfgx\" (UID: \"8986e8c3-7ce9-40ca-94dd-8258ee800dc3\") " pod="openshift-marketplace/redhat-marketplace-9qfgx" Feb 18 12:45:32 crc kubenswrapper[4717]: I0218 12:45:32.731851 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8986e8c3-7ce9-40ca-94dd-8258ee800dc3-utilities\") pod \"redhat-marketplace-9qfgx\" (UID: \"8986e8c3-7ce9-40ca-94dd-8258ee800dc3\") " pod="openshift-marketplace/redhat-marketplace-9qfgx" Feb 18 12:45:32 crc kubenswrapper[4717]: I0218 12:45:32.732179 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8986e8c3-7ce9-40ca-94dd-8258ee800dc3-catalog-content\") pod \"redhat-marketplace-9qfgx\" (UID: \"8986e8c3-7ce9-40ca-94dd-8258ee800dc3\") " pod="openshift-marketplace/redhat-marketplace-9qfgx" Feb 18 12:45:32 crc kubenswrapper[4717]: I0218 12:45:32.732297 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8986e8c3-7ce9-40ca-94dd-8258ee800dc3-utilities\") pod \"redhat-marketplace-9qfgx\" (UID: \"8986e8c3-7ce9-40ca-94dd-8258ee800dc3\") " pod="openshift-marketplace/redhat-marketplace-9qfgx" Feb 18 12:45:32 crc kubenswrapper[4717]: I0218 12:45:32.751057 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f67kz\" (UniqueName: \"kubernetes.io/projected/8986e8c3-7ce9-40ca-94dd-8258ee800dc3-kube-api-access-f67kz\") pod \"redhat-marketplace-9qfgx\" (UID: \"8986e8c3-7ce9-40ca-94dd-8258ee800dc3\") " pod="openshift-marketplace/redhat-marketplace-9qfgx" Feb 18 12:45:32 crc kubenswrapper[4717]: I0218 12:45:32.808977 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9qfgx" Feb 18 12:45:37 crc kubenswrapper[4717]: W0218 12:45:37.478479 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8986e8c3_7ce9_40ca_94dd_8258ee800dc3.slice/crio-40f26f601e89aa8005720259ee699fbddba2782219fe875456e44a5b4fc941e6 WatchSource:0}: Error finding container 40f26f601e89aa8005720259ee699fbddba2782219fe875456e44a5b4fc941e6: Status 404 returned error can't find the container with id 40f26f601e89aa8005720259ee699fbddba2782219fe875456e44a5b4fc941e6 Feb 18 12:45:37 crc kubenswrapper[4717]: I0218 12:45:37.484226 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qfgx"] Feb 18 12:45:38 crc kubenswrapper[4717]: I0218 12:45:38.364416 4717 generic.go:334] "Generic (PLEG): container finished" podID="8986e8c3-7ce9-40ca-94dd-8258ee800dc3" containerID="d0136eae82f83e95bf8083104fdcd0416d665c5ee9efb3dab849a42a53bc94c5" exitCode=0 Feb 18 12:45:38 crc kubenswrapper[4717]: I0218 12:45:38.364658 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qfgx" event={"ID":"8986e8c3-7ce9-40ca-94dd-8258ee800dc3","Type":"ContainerDied","Data":"d0136eae82f83e95bf8083104fdcd0416d665c5ee9efb3dab849a42a53bc94c5"} Feb 18 12:45:38 crc kubenswrapper[4717]: I0218 12:45:38.365044 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qfgx" event={"ID":"8986e8c3-7ce9-40ca-94dd-8258ee800dc3","Type":"ContainerStarted","Data":"40f26f601e89aa8005720259ee699fbddba2782219fe875456e44a5b4fc941e6"} Feb 18 12:45:38 crc kubenswrapper[4717]: I0218 12:45:38.372301 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4xhlg/crc-debug-25msk" 
event={"ID":"d30da568-9115-4cb4-b66b-52194bf653ae","Type":"ContainerStarted","Data":"5d17df50486739edc52921fc6cd536bde9698b1401b485fbf4e1283df08184c2"} Feb 18 12:45:38 crc kubenswrapper[4717]: I0218 12:45:38.435172 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4xhlg/crc-debug-25msk" podStartSLOduration=1.626408487 podStartE2EDuration="14.435141998s" podCreationTimestamp="2026-02-18 12:45:24 +0000 UTC" firstStartedPulling="2026-02-18 12:45:24.501869503 +0000 UTC m=+3358.903970819" lastFinishedPulling="2026-02-18 12:45:37.310603014 +0000 UTC m=+3371.712704330" observedRunningTime="2026-02-18 12:45:38.418582242 +0000 UTC m=+3372.820683558" watchObservedRunningTime="2026-02-18 12:45:38.435141998 +0000 UTC m=+3372.837243314" Feb 18 12:45:41 crc kubenswrapper[4717]: I0218 12:45:41.398899 4717 scope.go:117] "RemoveContainer" containerID="cf932904573e6b18d4e98e1748eec0f1b21b3568ff79037607e977728c8e8011" Feb 18 12:45:44 crc kubenswrapper[4717]: I0218 12:45:44.036468 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:45:44 crc kubenswrapper[4717]: E0218 12:45:44.037146 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:45:44 crc kubenswrapper[4717]: I0218 12:45:44.442669 4717 generic.go:334] "Generic (PLEG): container finished" podID="8986e8c3-7ce9-40ca-94dd-8258ee800dc3" containerID="c32d17cfe54f6afb76413846fa8fcef80916d1c348fe0a2003019259327d3b77" exitCode=0 Feb 18 12:45:44 crc kubenswrapper[4717]: I0218 12:45:44.442740 4717 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-9qfgx" event={"ID":"8986e8c3-7ce9-40ca-94dd-8258ee800dc3","Type":"ContainerDied","Data":"c32d17cfe54f6afb76413846fa8fcef80916d1c348fe0a2003019259327d3b77"} Feb 18 12:45:45 crc kubenswrapper[4717]: I0218 12:45:45.459522 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9qfgx" event={"ID":"8986e8c3-7ce9-40ca-94dd-8258ee800dc3","Type":"ContainerStarted","Data":"6db3e5d056d5c517bf6aeb96015aa22a643f1b1da0a59d84f3def50d07eb7496"} Feb 18 12:45:45 crc kubenswrapper[4717]: I0218 12:45:45.484599 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9qfgx" podStartSLOduration=6.82485755 podStartE2EDuration="13.484573848s" podCreationTimestamp="2026-02-18 12:45:32 +0000 UTC" firstStartedPulling="2026-02-18 12:45:38.367239584 +0000 UTC m=+3372.769340900" lastFinishedPulling="2026-02-18 12:45:45.026955882 +0000 UTC m=+3379.429057198" observedRunningTime="2026-02-18 12:45:45.479322247 +0000 UTC m=+3379.881423563" watchObservedRunningTime="2026-02-18 12:45:45.484573848 +0000 UTC m=+3379.886675164" Feb 18 12:45:52 crc kubenswrapper[4717]: I0218 12:45:52.809876 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9qfgx" Feb 18 12:45:52 crc kubenswrapper[4717]: I0218 12:45:52.810508 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9qfgx" Feb 18 12:45:52 crc kubenswrapper[4717]: I0218 12:45:52.868158 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9qfgx" Feb 18 12:45:53 crc kubenswrapper[4717]: I0218 12:45:53.596524 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9qfgx" Feb 18 12:45:53 crc kubenswrapper[4717]: I0218 12:45:53.665984 4717 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9qfgx"] Feb 18 12:45:53 crc kubenswrapper[4717]: I0218 12:45:53.710680 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-82l5m"] Feb 18 12:45:53 crc kubenswrapper[4717]: I0218 12:45:53.710993 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-82l5m" podUID="b05a9e06-924f-407e-a7f8-01b14310f300" containerName="registry-server" containerID="cri-o://de6b0c0eb702f62af838de2b8156969d0f562b1d937ced79fbf1cc4a43e46f53" gracePeriod=2 Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.233961 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82l5m" Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.237692 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm5jr\" (UniqueName: \"kubernetes.io/projected/b05a9e06-924f-407e-a7f8-01b14310f300-kube-api-access-zm5jr\") pod \"b05a9e06-924f-407e-a7f8-01b14310f300\" (UID: \"b05a9e06-924f-407e-a7f8-01b14310f300\") " Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.237748 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05a9e06-924f-407e-a7f8-01b14310f300-utilities\") pod \"b05a9e06-924f-407e-a7f8-01b14310f300\" (UID: \"b05a9e06-924f-407e-a7f8-01b14310f300\") " Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.237832 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05a9e06-924f-407e-a7f8-01b14310f300-catalog-content\") pod \"b05a9e06-924f-407e-a7f8-01b14310f300\" (UID: \"b05a9e06-924f-407e-a7f8-01b14310f300\") " Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.239038 4717 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a9e06-924f-407e-a7f8-01b14310f300-utilities" (OuterVolumeSpecName: "utilities") pod "b05a9e06-924f-407e-a7f8-01b14310f300" (UID: "b05a9e06-924f-407e-a7f8-01b14310f300"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.247105 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a9e06-924f-407e-a7f8-01b14310f300-kube-api-access-zm5jr" (OuterVolumeSpecName: "kube-api-access-zm5jr") pod "b05a9e06-924f-407e-a7f8-01b14310f300" (UID: "b05a9e06-924f-407e-a7f8-01b14310f300"). InnerVolumeSpecName "kube-api-access-zm5jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.281119 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a9e06-924f-407e-a7f8-01b14310f300-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b05a9e06-924f-407e-a7f8-01b14310f300" (UID: "b05a9e06-924f-407e-a7f8-01b14310f300"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.356755 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm5jr\" (UniqueName: \"kubernetes.io/projected/b05a9e06-924f-407e-a7f8-01b14310f300-kube-api-access-zm5jr\") on node \"crc\" DevicePath \"\"" Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.356807 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05a9e06-924f-407e-a7f8-01b14310f300-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.356830 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05a9e06-924f-407e-a7f8-01b14310f300-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.544936 4717 generic.go:334] "Generic (PLEG): container finished" podID="b05a9e06-924f-407e-a7f8-01b14310f300" containerID="de6b0c0eb702f62af838de2b8156969d0f562b1d937ced79fbf1cc4a43e46f53" exitCode=0 Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.545013 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82l5m" Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.545033 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82l5m" event={"ID":"b05a9e06-924f-407e-a7f8-01b14310f300","Type":"ContainerDied","Data":"de6b0c0eb702f62af838de2b8156969d0f562b1d937ced79fbf1cc4a43e46f53"} Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.545087 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82l5m" event={"ID":"b05a9e06-924f-407e-a7f8-01b14310f300","Type":"ContainerDied","Data":"8616f73393e7d9c5d5f4e43bdaaabea9284664de5e441f7e9235cc0cbd6d51f6"} Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.545110 4717 scope.go:117] "RemoveContainer" containerID="de6b0c0eb702f62af838de2b8156969d0f562b1d937ced79fbf1cc4a43e46f53" Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.566686 4717 scope.go:117] "RemoveContainer" containerID="b8706e2f438368605bf6173c627ba4084c9f82cd70be690c0e215a7869c18003" Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.593821 4717 scope.go:117] "RemoveContainer" containerID="bf2d82457eac1a06fa560ebc7d7f619a32295c5c0b326066e17018bdfa030535" Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.612740 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-82l5m"] Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.632579 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-82l5m"] Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.654922 4717 scope.go:117] "RemoveContainer" containerID="de6b0c0eb702f62af838de2b8156969d0f562b1d937ced79fbf1cc4a43e46f53" Feb 18 12:45:54 crc kubenswrapper[4717]: E0218 12:45:54.655465 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"de6b0c0eb702f62af838de2b8156969d0f562b1d937ced79fbf1cc4a43e46f53\": container with ID starting with de6b0c0eb702f62af838de2b8156969d0f562b1d937ced79fbf1cc4a43e46f53 not found: ID does not exist" containerID="de6b0c0eb702f62af838de2b8156969d0f562b1d937ced79fbf1cc4a43e46f53" Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.655497 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de6b0c0eb702f62af838de2b8156969d0f562b1d937ced79fbf1cc4a43e46f53"} err="failed to get container status \"de6b0c0eb702f62af838de2b8156969d0f562b1d937ced79fbf1cc4a43e46f53\": rpc error: code = NotFound desc = could not find container \"de6b0c0eb702f62af838de2b8156969d0f562b1d937ced79fbf1cc4a43e46f53\": container with ID starting with de6b0c0eb702f62af838de2b8156969d0f562b1d937ced79fbf1cc4a43e46f53 not found: ID does not exist" Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.655520 4717 scope.go:117] "RemoveContainer" containerID="b8706e2f438368605bf6173c627ba4084c9f82cd70be690c0e215a7869c18003" Feb 18 12:45:54 crc kubenswrapper[4717]: E0218 12:45:54.656395 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8706e2f438368605bf6173c627ba4084c9f82cd70be690c0e215a7869c18003\": container with ID starting with b8706e2f438368605bf6173c627ba4084c9f82cd70be690c0e215a7869c18003 not found: ID does not exist" containerID="b8706e2f438368605bf6173c627ba4084c9f82cd70be690c0e215a7869c18003" Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.656421 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8706e2f438368605bf6173c627ba4084c9f82cd70be690c0e215a7869c18003"} err="failed to get container status \"b8706e2f438368605bf6173c627ba4084c9f82cd70be690c0e215a7869c18003\": rpc error: code = NotFound desc = could not find container \"b8706e2f438368605bf6173c627ba4084c9f82cd70be690c0e215a7869c18003\": container with ID 
starting with b8706e2f438368605bf6173c627ba4084c9f82cd70be690c0e215a7869c18003 not found: ID does not exist" Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.656438 4717 scope.go:117] "RemoveContainer" containerID="bf2d82457eac1a06fa560ebc7d7f619a32295c5c0b326066e17018bdfa030535" Feb 18 12:45:54 crc kubenswrapper[4717]: E0218 12:45:54.660494 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf2d82457eac1a06fa560ebc7d7f619a32295c5c0b326066e17018bdfa030535\": container with ID starting with bf2d82457eac1a06fa560ebc7d7f619a32295c5c0b326066e17018bdfa030535 not found: ID does not exist" containerID="bf2d82457eac1a06fa560ebc7d7f619a32295c5c0b326066e17018bdfa030535" Feb 18 12:45:54 crc kubenswrapper[4717]: I0218 12:45:54.660524 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf2d82457eac1a06fa560ebc7d7f619a32295c5c0b326066e17018bdfa030535"} err="failed to get container status \"bf2d82457eac1a06fa560ebc7d7f619a32295c5c0b326066e17018bdfa030535\": rpc error: code = NotFound desc = could not find container \"bf2d82457eac1a06fa560ebc7d7f619a32295c5c0b326066e17018bdfa030535\": container with ID starting with bf2d82457eac1a06fa560ebc7d7f619a32295c5c0b326066e17018bdfa030535 not found: ID does not exist" Feb 18 12:45:55 crc kubenswrapper[4717]: I0218 12:45:55.060905 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a9e06-924f-407e-a7f8-01b14310f300" path="/var/lib/kubelet/pods/b05a9e06-924f-407e-a7f8-01b14310f300/volumes" Feb 18 12:45:56 crc kubenswrapper[4717]: I0218 12:45:56.036997 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:45:56 crc kubenswrapper[4717]: E0218 12:45:56.037679 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.074611 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-767x2"] Feb 18 12:46:05 crc kubenswrapper[4717]: E0218 12:46:05.075797 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05a9e06-924f-407e-a7f8-01b14310f300" containerName="extract-content" Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.075816 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05a9e06-924f-407e-a7f8-01b14310f300" containerName="extract-content" Feb 18 12:46:05 crc kubenswrapper[4717]: E0218 12:46:05.075836 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05a9e06-924f-407e-a7f8-01b14310f300" containerName="extract-utilities" Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.075844 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05a9e06-924f-407e-a7f8-01b14310f300" containerName="extract-utilities" Feb 18 12:46:05 crc kubenswrapper[4717]: E0218 12:46:05.075864 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05a9e06-924f-407e-a7f8-01b14310f300" containerName="registry-server" Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.075872 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05a9e06-924f-407e-a7f8-01b14310f300" containerName="registry-server" Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.076158 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="b05a9e06-924f-407e-a7f8-01b14310f300" containerName="registry-server" Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.077875 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.090672 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-767x2"] Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.091194 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-catalog-content\") pod \"redhat-operators-767x2\" (UID: \"ecc0db9d-255e-4d75-80e2-bd01a1d41eec\") " pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.091310 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77h4j\" (UniqueName: \"kubernetes.io/projected/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-kube-api-access-77h4j\") pod \"redhat-operators-767x2\" (UID: \"ecc0db9d-255e-4d75-80e2-bd01a1d41eec\") " pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.091439 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-utilities\") pod \"redhat-operators-767x2\" (UID: \"ecc0db9d-255e-4d75-80e2-bd01a1d41eec\") " pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.193550 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-catalog-content\") pod \"redhat-operators-767x2\" (UID: \"ecc0db9d-255e-4d75-80e2-bd01a1d41eec\") " pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.193680 4717 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-77h4j\" (UniqueName: \"kubernetes.io/projected/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-kube-api-access-77h4j\") pod \"redhat-operators-767x2\" (UID: \"ecc0db9d-255e-4d75-80e2-bd01a1d41eec\") " pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.193774 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-utilities\") pod \"redhat-operators-767x2\" (UID: \"ecc0db9d-255e-4d75-80e2-bd01a1d41eec\") " pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.194181 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-catalog-content\") pod \"redhat-operators-767x2\" (UID: \"ecc0db9d-255e-4d75-80e2-bd01a1d41eec\") " pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.194475 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-utilities\") pod \"redhat-operators-767x2\" (UID: \"ecc0db9d-255e-4d75-80e2-bd01a1d41eec\") " pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.219389 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77h4j\" (UniqueName: \"kubernetes.io/projected/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-kube-api-access-77h4j\") pod \"redhat-operators-767x2\" (UID: \"ecc0db9d-255e-4d75-80e2-bd01a1d41eec\") " pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:05 crc kubenswrapper[4717]: I0218 12:46:05.415524 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:06 crc kubenswrapper[4717]: I0218 12:46:06.076827 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-767x2"] Feb 18 12:46:06 crc kubenswrapper[4717]: W0218 12:46:06.102922 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecc0db9d_255e_4d75_80e2_bd01a1d41eec.slice/crio-d501790d8b6db60995d1b21373cbf3f0b59ca14e315ac2e24a173eddd68f7f75 WatchSource:0}: Error finding container d501790d8b6db60995d1b21373cbf3f0b59ca14e315ac2e24a173eddd68f7f75: Status 404 returned error can't find the container with id d501790d8b6db60995d1b21373cbf3f0b59ca14e315ac2e24a173eddd68f7f75 Feb 18 12:46:06 crc kubenswrapper[4717]: I0218 12:46:06.678306 4717 generic.go:334] "Generic (PLEG): container finished" podID="ecc0db9d-255e-4d75-80e2-bd01a1d41eec" containerID="9a493b366e5bae77c5fd56fadfe1d46e0b12c848fd7af96e34812d695e307ba4" exitCode=0 Feb 18 12:46:06 crc kubenswrapper[4717]: I0218 12:46:06.678546 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-767x2" event={"ID":"ecc0db9d-255e-4d75-80e2-bd01a1d41eec","Type":"ContainerDied","Data":"9a493b366e5bae77c5fd56fadfe1d46e0b12c848fd7af96e34812d695e307ba4"} Feb 18 12:46:06 crc kubenswrapper[4717]: I0218 12:46:06.678578 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-767x2" event={"ID":"ecc0db9d-255e-4d75-80e2-bd01a1d41eec","Type":"ContainerStarted","Data":"d501790d8b6db60995d1b21373cbf3f0b59ca14e315ac2e24a173eddd68f7f75"} Feb 18 12:46:08 crc kubenswrapper[4717]: I0218 12:46:08.698636 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-767x2" 
event={"ID":"ecc0db9d-255e-4d75-80e2-bd01a1d41eec","Type":"ContainerStarted","Data":"0c4ca3683057fcef77ca5a4410443cb361cc3c64bcc9286478764cf573bb3d28"} Feb 18 12:46:10 crc kubenswrapper[4717]: I0218 12:46:10.037224 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:46:10 crc kubenswrapper[4717]: E0218 12:46:10.038033 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:46:13 crc kubenswrapper[4717]: I0218 12:46:13.750615 4717 generic.go:334] "Generic (PLEG): container finished" podID="ecc0db9d-255e-4d75-80e2-bd01a1d41eec" containerID="0c4ca3683057fcef77ca5a4410443cb361cc3c64bcc9286478764cf573bb3d28" exitCode=0 Feb 18 12:46:13 crc kubenswrapper[4717]: I0218 12:46:13.750794 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-767x2" event={"ID":"ecc0db9d-255e-4d75-80e2-bd01a1d41eec","Type":"ContainerDied","Data":"0c4ca3683057fcef77ca5a4410443cb361cc3c64bcc9286478764cf573bb3d28"} Feb 18 12:46:14 crc kubenswrapper[4717]: I0218 12:46:14.760899 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-767x2" event={"ID":"ecc0db9d-255e-4d75-80e2-bd01a1d41eec","Type":"ContainerStarted","Data":"8a472c4cd84119a4bb0bcf4042a96cd0d9301df11410902bdb4fe43144f30533"} Feb 18 12:46:14 crc kubenswrapper[4717]: I0218 12:46:14.787673 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-767x2" podStartSLOduration=2.169883108 podStartE2EDuration="9.787624319s" 
podCreationTimestamp="2026-02-18 12:46:05 +0000 UTC" firstStartedPulling="2026-02-18 12:46:06.681068254 +0000 UTC m=+3401.083169570" lastFinishedPulling="2026-02-18 12:46:14.298809465 +0000 UTC m=+3408.700910781" observedRunningTime="2026-02-18 12:46:14.78488998 +0000 UTC m=+3409.186991296" watchObservedRunningTime="2026-02-18 12:46:14.787624319 +0000 UTC m=+3409.189725665" Feb 18 12:46:15 crc kubenswrapper[4717]: I0218 12:46:15.416608 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:15 crc kubenswrapper[4717]: I0218 12:46:15.416908 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:16 crc kubenswrapper[4717]: I0218 12:46:16.478884 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-767x2" podUID="ecc0db9d-255e-4d75-80e2-bd01a1d41eec" containerName="registry-server" probeResult="failure" output=< Feb 18 12:46:16 crc kubenswrapper[4717]: timeout: failed to connect service ":50051" within 1s Feb 18 12:46:16 crc kubenswrapper[4717]: > Feb 18 12:46:22 crc kubenswrapper[4717]: I0218 12:46:22.036650 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:46:22 crc kubenswrapper[4717]: E0218 12:46:22.037567 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:46:25 crc kubenswrapper[4717]: I0218 12:46:25.490037 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:25 crc kubenswrapper[4717]: I0218 12:46:25.545834 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:25 crc kubenswrapper[4717]: I0218 12:46:25.739523 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-767x2"] Feb 18 12:46:25 crc kubenswrapper[4717]: I0218 12:46:25.868581 4717 generic.go:334] "Generic (PLEG): container finished" podID="d30da568-9115-4cb4-b66b-52194bf653ae" containerID="5d17df50486739edc52921fc6cd536bde9698b1401b485fbf4e1283df08184c2" exitCode=0 Feb 18 12:46:25 crc kubenswrapper[4717]: I0218 12:46:25.868667 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4xhlg/crc-debug-25msk" event={"ID":"d30da568-9115-4cb4-b66b-52194bf653ae","Type":"ContainerDied","Data":"5d17df50486739edc52921fc6cd536bde9698b1401b485fbf4e1283df08184c2"} Feb 18 12:46:26 crc kubenswrapper[4717]: I0218 12:46:26.878724 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-767x2" podUID="ecc0db9d-255e-4d75-80e2-bd01a1d41eec" containerName="registry-server" containerID="cri-o://8a472c4cd84119a4bb0bcf4042a96cd0d9301df11410902bdb4fe43144f30533" gracePeriod=2 Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.129771 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4xhlg/crc-debug-25msk" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.186368 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4xhlg/crc-debug-25msk"] Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.202992 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4xhlg/crc-debug-25msk"] Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.295087 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcnnh\" (UniqueName: \"kubernetes.io/projected/d30da568-9115-4cb4-b66b-52194bf653ae-kube-api-access-qcnnh\") pod \"d30da568-9115-4cb4-b66b-52194bf653ae\" (UID: \"d30da568-9115-4cb4-b66b-52194bf653ae\") " Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.295366 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d30da568-9115-4cb4-b66b-52194bf653ae-host\") pod \"d30da568-9115-4cb4-b66b-52194bf653ae\" (UID: \"d30da568-9115-4cb4-b66b-52194bf653ae\") " Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.295896 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d30da568-9115-4cb4-b66b-52194bf653ae-host" (OuterVolumeSpecName: "host") pod "d30da568-9115-4cb4-b66b-52194bf653ae" (UID: "d30da568-9115-4cb4-b66b-52194bf653ae"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.320530 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d30da568-9115-4cb4-b66b-52194bf653ae-kube-api-access-qcnnh" (OuterVolumeSpecName: "kube-api-access-qcnnh") pod "d30da568-9115-4cb4-b66b-52194bf653ae" (UID: "d30da568-9115-4cb4-b66b-52194bf653ae"). InnerVolumeSpecName "kube-api-access-qcnnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.399699 4717 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d30da568-9115-4cb4-b66b-52194bf653ae-host\") on node \"crc\" DevicePath \"\"" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.399751 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcnnh\" (UniqueName: \"kubernetes.io/projected/d30da568-9115-4cb4-b66b-52194bf653ae-kube-api-access-qcnnh\") on node \"crc\" DevicePath \"\"" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.416633 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.500876 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77h4j\" (UniqueName: \"kubernetes.io/projected/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-kube-api-access-77h4j\") pod \"ecc0db9d-255e-4d75-80e2-bd01a1d41eec\" (UID: \"ecc0db9d-255e-4d75-80e2-bd01a1d41eec\") " Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.500935 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-catalog-content\") pod \"ecc0db9d-255e-4d75-80e2-bd01a1d41eec\" (UID: \"ecc0db9d-255e-4d75-80e2-bd01a1d41eec\") " Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.501112 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-utilities\") pod \"ecc0db9d-255e-4d75-80e2-bd01a1d41eec\" (UID: \"ecc0db9d-255e-4d75-80e2-bd01a1d41eec\") " Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.501969 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-utilities" (OuterVolumeSpecName: "utilities") pod "ecc0db9d-255e-4d75-80e2-bd01a1d41eec" (UID: "ecc0db9d-255e-4d75-80e2-bd01a1d41eec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.504610 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-kube-api-access-77h4j" (OuterVolumeSpecName: "kube-api-access-77h4j") pod "ecc0db9d-255e-4d75-80e2-bd01a1d41eec" (UID: "ecc0db9d-255e-4d75-80e2-bd01a1d41eec"). InnerVolumeSpecName "kube-api-access-77h4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.603870 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77h4j\" (UniqueName: \"kubernetes.io/projected/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-kube-api-access-77h4j\") on node \"crc\" DevicePath \"\"" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.603918 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.631818 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecc0db9d-255e-4d75-80e2-bd01a1d41eec" (UID: "ecc0db9d-255e-4d75-80e2-bd01a1d41eec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.706360 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecc0db9d-255e-4d75-80e2-bd01a1d41eec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.890380 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b907112e8f3706c01fc6233d57debf23839e38085110d7cbea967ca82cbcefb" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.890478 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4xhlg/crc-debug-25msk" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.899301 4717 generic.go:334] "Generic (PLEG): container finished" podID="ecc0db9d-255e-4d75-80e2-bd01a1d41eec" containerID="8a472c4cd84119a4bb0bcf4042a96cd0d9301df11410902bdb4fe43144f30533" exitCode=0 Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.899351 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-767x2" event={"ID":"ecc0db9d-255e-4d75-80e2-bd01a1d41eec","Type":"ContainerDied","Data":"8a472c4cd84119a4bb0bcf4042a96cd0d9301df11410902bdb4fe43144f30533"} Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.899386 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-767x2" event={"ID":"ecc0db9d-255e-4d75-80e2-bd01a1d41eec","Type":"ContainerDied","Data":"d501790d8b6db60995d1b21373cbf3f0b59ca14e315ac2e24a173eddd68f7f75"} Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.899405 4717 scope.go:117] "RemoveContainer" containerID="8a472c4cd84119a4bb0bcf4042a96cd0d9301df11410902bdb4fe43144f30533" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.899556 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-767x2" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.961876 4717 scope.go:117] "RemoveContainer" containerID="0c4ca3683057fcef77ca5a4410443cb361cc3c64bcc9286478764cf573bb3d28" Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.970329 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-767x2"] Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.980385 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-767x2"] Feb 18 12:46:27 crc kubenswrapper[4717]: I0218 12:46:27.985216 4717 scope.go:117] "RemoveContainer" containerID="9a493b366e5bae77c5fd56fadfe1d46e0b12c848fd7af96e34812d695e307ba4" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.046428 4717 scope.go:117] "RemoveContainer" containerID="8a472c4cd84119a4bb0bcf4042a96cd0d9301df11410902bdb4fe43144f30533" Feb 18 12:46:28 crc kubenswrapper[4717]: E0218 12:46:28.047733 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a472c4cd84119a4bb0bcf4042a96cd0d9301df11410902bdb4fe43144f30533\": container with ID starting with 8a472c4cd84119a4bb0bcf4042a96cd0d9301df11410902bdb4fe43144f30533 not found: ID does not exist" containerID="8a472c4cd84119a4bb0bcf4042a96cd0d9301df11410902bdb4fe43144f30533" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.047837 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a472c4cd84119a4bb0bcf4042a96cd0d9301df11410902bdb4fe43144f30533"} err="failed to get container status \"8a472c4cd84119a4bb0bcf4042a96cd0d9301df11410902bdb4fe43144f30533\": rpc error: code = NotFound desc = could not find container \"8a472c4cd84119a4bb0bcf4042a96cd0d9301df11410902bdb4fe43144f30533\": container with ID starting with 8a472c4cd84119a4bb0bcf4042a96cd0d9301df11410902bdb4fe43144f30533 not found: ID does 
not exist" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.047898 4717 scope.go:117] "RemoveContainer" containerID="0c4ca3683057fcef77ca5a4410443cb361cc3c64bcc9286478764cf573bb3d28" Feb 18 12:46:28 crc kubenswrapper[4717]: E0218 12:46:28.048479 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c4ca3683057fcef77ca5a4410443cb361cc3c64bcc9286478764cf573bb3d28\": container with ID starting with 0c4ca3683057fcef77ca5a4410443cb361cc3c64bcc9286478764cf573bb3d28 not found: ID does not exist" containerID="0c4ca3683057fcef77ca5a4410443cb361cc3c64bcc9286478764cf573bb3d28" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.048518 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c4ca3683057fcef77ca5a4410443cb361cc3c64bcc9286478764cf573bb3d28"} err="failed to get container status \"0c4ca3683057fcef77ca5a4410443cb361cc3c64bcc9286478764cf573bb3d28\": rpc error: code = NotFound desc = could not find container \"0c4ca3683057fcef77ca5a4410443cb361cc3c64bcc9286478764cf573bb3d28\": container with ID starting with 0c4ca3683057fcef77ca5a4410443cb361cc3c64bcc9286478764cf573bb3d28 not found: ID does not exist" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.048540 4717 scope.go:117] "RemoveContainer" containerID="9a493b366e5bae77c5fd56fadfe1d46e0b12c848fd7af96e34812d695e307ba4" Feb 18 12:46:28 crc kubenswrapper[4717]: E0218 12:46:28.048952 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a493b366e5bae77c5fd56fadfe1d46e0b12c848fd7af96e34812d695e307ba4\": container with ID starting with 9a493b366e5bae77c5fd56fadfe1d46e0b12c848fd7af96e34812d695e307ba4 not found: ID does not exist" containerID="9a493b366e5bae77c5fd56fadfe1d46e0b12c848fd7af96e34812d695e307ba4" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.049027 4717 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a493b366e5bae77c5fd56fadfe1d46e0b12c848fd7af96e34812d695e307ba4"} err="failed to get container status \"9a493b366e5bae77c5fd56fadfe1d46e0b12c848fd7af96e34812d695e307ba4\": rpc error: code = NotFound desc = could not find container \"9a493b366e5bae77c5fd56fadfe1d46e0b12c848fd7af96e34812d695e307ba4\": container with ID starting with 9a493b366e5bae77c5fd56fadfe1d46e0b12c848fd7af96e34812d695e307ba4 not found: ID does not exist" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.429058 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4xhlg/crc-debug-4wjgn"] Feb 18 12:46:28 crc kubenswrapper[4717]: E0218 12:46:28.432058 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc0db9d-255e-4d75-80e2-bd01a1d41eec" containerName="extract-content" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.432086 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc0db9d-255e-4d75-80e2-bd01a1d41eec" containerName="extract-content" Feb 18 12:46:28 crc kubenswrapper[4717]: E0218 12:46:28.432113 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc0db9d-255e-4d75-80e2-bd01a1d41eec" containerName="extract-utilities" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.432122 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc0db9d-255e-4d75-80e2-bd01a1d41eec" containerName="extract-utilities" Feb 18 12:46:28 crc kubenswrapper[4717]: E0218 12:46:28.432166 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc0db9d-255e-4d75-80e2-bd01a1d41eec" containerName="registry-server" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.432175 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc0db9d-255e-4d75-80e2-bd01a1d41eec" containerName="registry-server" Feb 18 12:46:28 crc kubenswrapper[4717]: E0218 12:46:28.432195 4717 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d30da568-9115-4cb4-b66b-52194bf653ae" containerName="container-00" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.432203 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30da568-9115-4cb4-b66b-52194bf653ae" containerName="container-00" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.432498 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="d30da568-9115-4cb4-b66b-52194bf653ae" containerName="container-00" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.432518 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc0db9d-255e-4d75-80e2-bd01a1d41eec" containerName="registry-server" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.433520 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4xhlg/crc-debug-4wjgn" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.522412 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c-host\") pod \"crc-debug-4wjgn\" (UID: \"dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c\") " pod="openshift-must-gather-4xhlg/crc-debug-4wjgn" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.522677 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6mhh\" (UniqueName: \"kubernetes.io/projected/dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c-kube-api-access-p6mhh\") pod \"crc-debug-4wjgn\" (UID: \"dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c\") " pod="openshift-must-gather-4xhlg/crc-debug-4wjgn" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.624772 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c-host\") pod \"crc-debug-4wjgn\" (UID: \"dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c\") " 
pod="openshift-must-gather-4xhlg/crc-debug-4wjgn" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.624878 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6mhh\" (UniqueName: \"kubernetes.io/projected/dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c-kube-api-access-p6mhh\") pod \"crc-debug-4wjgn\" (UID: \"dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c\") " pod="openshift-must-gather-4xhlg/crc-debug-4wjgn" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.624919 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c-host\") pod \"crc-debug-4wjgn\" (UID: \"dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c\") " pod="openshift-must-gather-4xhlg/crc-debug-4wjgn" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.644505 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6mhh\" (UniqueName: \"kubernetes.io/projected/dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c-kube-api-access-p6mhh\") pod \"crc-debug-4wjgn\" (UID: \"dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c\") " pod="openshift-must-gather-4xhlg/crc-debug-4wjgn" Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.750873 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4xhlg/crc-debug-4wjgn" Feb 18 12:46:28 crc kubenswrapper[4717]: W0218 12:46:28.776888 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd5c2cb4_d178_4c0f_acd3_838cf1af9b2c.slice/crio-650425154bd4b2fb92c24fb8098ae56ddf9048124b4e24c0541cca5042de75b7 WatchSource:0}: Error finding container 650425154bd4b2fb92c24fb8098ae56ddf9048124b4e24c0541cca5042de75b7: Status 404 returned error can't find the container with id 650425154bd4b2fb92c24fb8098ae56ddf9048124b4e24c0541cca5042de75b7 Feb 18 12:46:28 crc kubenswrapper[4717]: I0218 12:46:28.911007 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4xhlg/crc-debug-4wjgn" event={"ID":"dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c","Type":"ContainerStarted","Data":"650425154bd4b2fb92c24fb8098ae56ddf9048124b4e24c0541cca5042de75b7"} Feb 18 12:46:29 crc kubenswrapper[4717]: I0218 12:46:29.050354 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d30da568-9115-4cb4-b66b-52194bf653ae" path="/var/lib/kubelet/pods/d30da568-9115-4cb4-b66b-52194bf653ae/volumes" Feb 18 12:46:29 crc kubenswrapper[4717]: I0218 12:46:29.051496 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecc0db9d-255e-4d75-80e2-bd01a1d41eec" path="/var/lib/kubelet/pods/ecc0db9d-255e-4d75-80e2-bd01a1d41eec/volumes" Feb 18 12:46:29 crc kubenswrapper[4717]: I0218 12:46:29.924811 4717 generic.go:334] "Generic (PLEG): container finished" podID="dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c" containerID="1a2532ac9a8744b7bacfdac43fbba6ba2a2f32164118b8157db609f7ce35a5e4" exitCode=0 Feb 18 12:46:29 crc kubenswrapper[4717]: I0218 12:46:29.924873 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4xhlg/crc-debug-4wjgn" 
event={"ID":"dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c","Type":"ContainerDied","Data":"1a2532ac9a8744b7bacfdac43fbba6ba2a2f32164118b8157db609f7ce35a5e4"} Feb 18 12:46:30 crc kubenswrapper[4717]: I0218 12:46:30.464786 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4xhlg/crc-debug-4wjgn"] Feb 18 12:46:30 crc kubenswrapper[4717]: I0218 12:46:30.474780 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4xhlg/crc-debug-4wjgn"] Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.039103 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4xhlg/crc-debug-4wjgn" Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.072360 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c-host\") pod \"dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c\" (UID: \"dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c\") " Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.072525 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c-host" (OuterVolumeSpecName: "host") pod "dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c" (UID: "dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.072574 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6mhh\" (UniqueName: \"kubernetes.io/projected/dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c-kube-api-access-p6mhh\") pod \"dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c\" (UID: \"dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c\") " Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.074718 4717 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c-host\") on node \"crc\" DevicePath \"\"" Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.079072 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c-kube-api-access-p6mhh" (OuterVolumeSpecName: "kube-api-access-p6mhh") pod "dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c" (UID: "dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c"). InnerVolumeSpecName "kube-api-access-p6mhh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.177100 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6mhh\" (UniqueName: \"kubernetes.io/projected/dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c-kube-api-access-p6mhh\") on node \"crc\" DevicePath \"\"" Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.633178 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4xhlg/crc-debug-5t724"] Feb 18 12:46:31 crc kubenswrapper[4717]: E0218 12:46:31.633714 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c" containerName="container-00" Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.633729 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c" containerName="container-00" Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.633952 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c" containerName="container-00" Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.634841 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4xhlg/crc-debug-5t724" Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.688427 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/058c76af-158d-4dce-9ecb-6dbd7352be6a-host\") pod \"crc-debug-5t724\" (UID: \"058c76af-158d-4dce-9ecb-6dbd7352be6a\") " pod="openshift-must-gather-4xhlg/crc-debug-5t724" Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.688784 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29tg6\" (UniqueName: \"kubernetes.io/projected/058c76af-158d-4dce-9ecb-6dbd7352be6a-kube-api-access-29tg6\") pod \"crc-debug-5t724\" (UID: \"058c76af-158d-4dce-9ecb-6dbd7352be6a\") " pod="openshift-must-gather-4xhlg/crc-debug-5t724" Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.790946 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29tg6\" (UniqueName: \"kubernetes.io/projected/058c76af-158d-4dce-9ecb-6dbd7352be6a-kube-api-access-29tg6\") pod \"crc-debug-5t724\" (UID: \"058c76af-158d-4dce-9ecb-6dbd7352be6a\") " pod="openshift-must-gather-4xhlg/crc-debug-5t724" Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.791059 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/058c76af-158d-4dce-9ecb-6dbd7352be6a-host\") pod \"crc-debug-5t724\" (UID: \"058c76af-158d-4dce-9ecb-6dbd7352be6a\") " pod="openshift-must-gather-4xhlg/crc-debug-5t724" Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.791209 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/058c76af-158d-4dce-9ecb-6dbd7352be6a-host\") pod \"crc-debug-5t724\" (UID: \"058c76af-158d-4dce-9ecb-6dbd7352be6a\") " pod="openshift-must-gather-4xhlg/crc-debug-5t724" Feb 18 12:46:31 crc 
kubenswrapper[4717]: I0218 12:46:31.821203 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29tg6\" (UniqueName: \"kubernetes.io/projected/058c76af-158d-4dce-9ecb-6dbd7352be6a-kube-api-access-29tg6\") pod \"crc-debug-5t724\" (UID: \"058c76af-158d-4dce-9ecb-6dbd7352be6a\") " pod="openshift-must-gather-4xhlg/crc-debug-5t724" Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.944680 4717 scope.go:117] "RemoveContainer" containerID="1a2532ac9a8744b7bacfdac43fbba6ba2a2f32164118b8157db609f7ce35a5e4" Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.944765 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4xhlg/crc-debug-4wjgn" Feb 18 12:46:31 crc kubenswrapper[4717]: I0218 12:46:31.955767 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4xhlg/crc-debug-5t724" Feb 18 12:46:32 crc kubenswrapper[4717]: W0218 12:46:32.003072 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod058c76af_158d_4dce_9ecb_6dbd7352be6a.slice/crio-62dd053355a09885915315a30ff1943ee4ec4d8f1577e6d0d707d58edb2ecef9 WatchSource:0}: Error finding container 62dd053355a09885915315a30ff1943ee4ec4d8f1577e6d0d707d58edb2ecef9: Status 404 returned error can't find the container with id 62dd053355a09885915315a30ff1943ee4ec4d8f1577e6d0d707d58edb2ecef9 Feb 18 12:46:32 crc kubenswrapper[4717]: I0218 12:46:32.962054 4717 generic.go:334] "Generic (PLEG): container finished" podID="058c76af-158d-4dce-9ecb-6dbd7352be6a" containerID="b38cab8c5ffa86c14f077131a7178860c08aa137f703cfd0a2a105c7e0e34825" exitCode=0 Feb 18 12:46:32 crc kubenswrapper[4717]: I0218 12:46:32.962137 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4xhlg/crc-debug-5t724" 
event={"ID":"058c76af-158d-4dce-9ecb-6dbd7352be6a","Type":"ContainerDied","Data":"b38cab8c5ffa86c14f077131a7178860c08aa137f703cfd0a2a105c7e0e34825"} Feb 18 12:46:32 crc kubenswrapper[4717]: I0218 12:46:32.962587 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4xhlg/crc-debug-5t724" event={"ID":"058c76af-158d-4dce-9ecb-6dbd7352be6a","Type":"ContainerStarted","Data":"62dd053355a09885915315a30ff1943ee4ec4d8f1577e6d0d707d58edb2ecef9"} Feb 18 12:46:33 crc kubenswrapper[4717]: I0218 12:46:33.020685 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4xhlg/crc-debug-5t724"] Feb 18 12:46:33 crc kubenswrapper[4717]: I0218 12:46:33.033105 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4xhlg/crc-debug-5t724"] Feb 18 12:46:33 crc kubenswrapper[4717]: I0218 12:46:33.046889 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c" path="/var/lib/kubelet/pods/dd5c2cb4-d178-4c0f-acd3-838cf1af9b2c/volumes" Feb 18 12:46:34 crc kubenswrapper[4717]: I0218 12:46:34.037706 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:46:34 crc kubenswrapper[4717]: E0218 12:46:34.038294 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:46:34 crc kubenswrapper[4717]: I0218 12:46:34.087233 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4xhlg/crc-debug-5t724" Feb 18 12:46:34 crc kubenswrapper[4717]: I0218 12:46:34.140795 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/058c76af-158d-4dce-9ecb-6dbd7352be6a-host\") pod \"058c76af-158d-4dce-9ecb-6dbd7352be6a\" (UID: \"058c76af-158d-4dce-9ecb-6dbd7352be6a\") " Feb 18 12:46:34 crc kubenswrapper[4717]: I0218 12:46:34.140903 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29tg6\" (UniqueName: \"kubernetes.io/projected/058c76af-158d-4dce-9ecb-6dbd7352be6a-kube-api-access-29tg6\") pod \"058c76af-158d-4dce-9ecb-6dbd7352be6a\" (UID: \"058c76af-158d-4dce-9ecb-6dbd7352be6a\") " Feb 18 12:46:34 crc kubenswrapper[4717]: I0218 12:46:34.140964 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/058c76af-158d-4dce-9ecb-6dbd7352be6a-host" (OuterVolumeSpecName: "host") pod "058c76af-158d-4dce-9ecb-6dbd7352be6a" (UID: "058c76af-158d-4dce-9ecb-6dbd7352be6a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 12:46:34 crc kubenswrapper[4717]: I0218 12:46:34.141782 4717 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/058c76af-158d-4dce-9ecb-6dbd7352be6a-host\") on node \"crc\" DevicePath \"\"" Feb 18 12:46:34 crc kubenswrapper[4717]: I0218 12:46:34.146534 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/058c76af-158d-4dce-9ecb-6dbd7352be6a-kube-api-access-29tg6" (OuterVolumeSpecName: "kube-api-access-29tg6") pod "058c76af-158d-4dce-9ecb-6dbd7352be6a" (UID: "058c76af-158d-4dce-9ecb-6dbd7352be6a"). InnerVolumeSpecName "kube-api-access-29tg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:46:34 crc kubenswrapper[4717]: I0218 12:46:34.243790 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29tg6\" (UniqueName: \"kubernetes.io/projected/058c76af-158d-4dce-9ecb-6dbd7352be6a-kube-api-access-29tg6\") on node \"crc\" DevicePath \"\"" Feb 18 12:46:34 crc kubenswrapper[4717]: I0218 12:46:34.997103 4717 scope.go:117] "RemoveContainer" containerID="b38cab8c5ffa86c14f077131a7178860c08aa137f703cfd0a2a105c7e0e34825" Feb 18 12:46:34 crc kubenswrapper[4717]: I0218 12:46:34.997190 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4xhlg/crc-debug-5t724" Feb 18 12:46:35 crc kubenswrapper[4717]: I0218 12:46:35.048345 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="058c76af-158d-4dce-9ecb-6dbd7352be6a" path="/var/lib/kubelet/pods/058c76af-158d-4dce-9ecb-6dbd7352be6a/volumes" Feb 18 12:46:45 crc kubenswrapper[4717]: I0218 12:46:45.036984 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:46:46 crc kubenswrapper[4717]: I0218 12:46:46.132451 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"003acb3fe0951f7b09611ee8f5a9d4d0657e29265ad2614b2da8e339d3d0ea1b"} Feb 18 12:46:49 crc kubenswrapper[4717]: I0218 12:46:49.106872 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-55d8f77d98-h2lh4_afc0d8b7-77e8-4fa8-8fea-70d32de7045c/barbican-api/0.log" Feb 18 12:46:49 crc kubenswrapper[4717]: I0218 12:46:49.278914 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-55d8f77d98-h2lh4_afc0d8b7-77e8-4fa8-8fea-70d32de7045c/barbican-api-log/0.log" Feb 18 12:46:49 crc kubenswrapper[4717]: I0218 12:46:49.486419 4717 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b857cc988-tklbq_42c182ca-f340-4ddf-ac38-b5eba6d9dbe5/barbican-keystone-listener-log/0.log" Feb 18 12:46:49 crc kubenswrapper[4717]: I0218 12:46:49.853907 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b857cc988-tklbq_42c182ca-f340-4ddf-ac38-b5eba6d9dbe5/barbican-keystone-listener/0.log" Feb 18 12:46:50 crc kubenswrapper[4717]: I0218 12:46:50.035520 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8649d4b975-zblq7_45e47daf-054d-4262-b76c-349fb97ec950/barbican-worker/0.log" Feb 18 12:46:50 crc kubenswrapper[4717]: I0218 12:46:50.047046 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8649d4b975-zblq7_45e47daf-054d-4262-b76c-349fb97ec950/barbican-worker-log/0.log" Feb 18 12:46:50 crc kubenswrapper[4717]: I0218 12:46:50.292238 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg_ca940f40-1894-4b6a-bb57-acac20cd47f3/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:46:50 crc kubenswrapper[4717]: I0218 12:46:50.317731 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_73da3578-c044-4462-9bdc-0985effde3bf/ceilometer-central-agent/0.log" Feb 18 12:46:50 crc kubenswrapper[4717]: I0218 12:46:50.430097 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_73da3578-c044-4462-9bdc-0985effde3bf/ceilometer-notification-agent/0.log" Feb 18 12:46:50 crc kubenswrapper[4717]: I0218 12:46:50.507583 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_73da3578-c044-4462-9bdc-0985effde3bf/proxy-httpd/0.log" Feb 18 12:46:50 crc kubenswrapper[4717]: I0218 12:46:50.520717 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_73da3578-c044-4462-9bdc-0985effde3bf/sg-core/0.log" Feb 18 12:46:50 crc kubenswrapper[4717]: I0218 12:46:50.698420 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_756321c7-681b-4183-b0ef-4afab35a28ae/cinder-api/0.log" Feb 18 12:46:50 crc kubenswrapper[4717]: I0218 12:46:50.743195 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_756321c7-681b-4183-b0ef-4afab35a28ae/cinder-api-log/0.log" Feb 18 12:46:51 crc kubenswrapper[4717]: I0218 12:46:51.112876 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_96f31374-cbb7-49a4-878b-5667b60b960e/cinder-scheduler/0.log" Feb 18 12:46:51 crc kubenswrapper[4717]: I0218 12:46:51.242234 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_96f31374-cbb7-49a4-878b-5667b60b960e/probe/0.log" Feb 18 12:46:51 crc kubenswrapper[4717]: I0218 12:46:51.958324 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx_38e6fde5-49a9-46e9-bb5d-382aaf00efa7/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:46:51 crc kubenswrapper[4717]: I0218 12:46:51.964874 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk_16dc15ac-c4c3-4d90-8fd2-20054f92b894/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:46:52 crc kubenswrapper[4717]: I0218 12:46:52.180228 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-q9j9f_dffcf399-439f-4698-8a0a-b247675685be/init/0.log" Feb 18 12:46:52 crc kubenswrapper[4717]: I0218 12:46:52.480365 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-q9j9f_dffcf399-439f-4698-8a0a-b247675685be/init/0.log" Feb 18 12:46:52 crc kubenswrapper[4717]: I0218 
12:46:52.500172 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-dfh57_57c4f818-2860-43d9-9c1b-f99b48449af0/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:46:52 crc kubenswrapper[4717]: I0218 12:46:52.566853 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-q9j9f_dffcf399-439f-4698-8a0a-b247675685be/dnsmasq-dns/0.log" Feb 18 12:46:52 crc kubenswrapper[4717]: I0218 12:46:52.777599 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_05e71379-15cf-4f83-a548-a46ba29caada/glance-httpd/0.log" Feb 18 12:46:52 crc kubenswrapper[4717]: I0218 12:46:52.796222 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_05e71379-15cf-4f83-a548-a46ba29caada/glance-log/0.log" Feb 18 12:46:53 crc kubenswrapper[4717]: I0218 12:46:53.029715 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_71fd55dc-beb4-4d07-af77-f244d5b1d399/glance-httpd/0.log" Feb 18 12:46:53 crc kubenswrapper[4717]: I0218 12:46:53.082548 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_71fd55dc-beb4-4d07-af77-f244d5b1d399/glance-log/0.log" Feb 18 12:46:53 crc kubenswrapper[4717]: I0218 12:46:53.206849 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-d949b4564-9ns6m_72d097e6-a40b-4e3e-8376-f3866f63e9d3/horizon/0.log" Feb 18 12:46:53 crc kubenswrapper[4717]: I0218 12:46:53.355657 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5_fd992beb-fe74-4076-8233-3bdc67b5de99/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:46:53 crc kubenswrapper[4717]: I0218 12:46:53.461567 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-8dcqq_a2da22c1-d4ae-46e5-919a-b69a2a8807e1/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:46:53 crc kubenswrapper[4717]: I0218 12:46:53.465184 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-d949b4564-9ns6m_72d097e6-a40b-4e3e-8376-f3866f63e9d3/horizon-log/0.log" Feb 18 12:46:53 crc kubenswrapper[4717]: I0218 12:46:53.724862 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_71b7862b-19b8-4921-955c-4948b428f4eb/kube-state-metrics/0.log" Feb 18 12:46:53 crc kubenswrapper[4717]: I0218 12:46:53.743889 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-766b8ffffc-2xlnb_1554ac8b-466a-47d1-a768-b250ee1ca204/keystone-api/0.log" Feb 18 12:46:53 crc kubenswrapper[4717]: I0218 12:46:53.871411 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj_6e45806a-5dfc-4368-b276-a59ba198f17e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:46:54 crc kubenswrapper[4717]: I0218 12:46:54.152123 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7bf89fd777-rchb4_a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c/neutron-api/0.log" Feb 18 12:46:54 crc kubenswrapper[4717]: I0218 12:46:54.252743 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6_3ef3746b-6714-4c50-b0fe-3d5d1632f6c6/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:46:54 crc kubenswrapper[4717]: I0218 12:46:54.252764 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7bf89fd777-rchb4_a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c/neutron-httpd/0.log" Feb 18 12:46:54 crc kubenswrapper[4717]: I0218 12:46:54.888718 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_fb3569ab-95c9-42eb-9c7d-979b7c09f862/nova-cell0-conductor-conductor/0.log" Feb 18 12:46:54 crc kubenswrapper[4717]: I0218 12:46:54.931946 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2b4f4f54-2066-4277-a45f-aefd9dc8130c/nova-api-log/0.log" Feb 18 12:46:55 crc kubenswrapper[4717]: I0218 12:46:55.147756 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2b4f4f54-2066-4277-a45f-aefd9dc8130c/nova-api-api/0.log" Feb 18 12:46:55 crc kubenswrapper[4717]: I0218 12:46:55.184243 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_377276ef-9093-4bae-954b-b833c89261ea/nova-cell1-conductor-conductor/0.log" Feb 18 12:46:55 crc kubenswrapper[4717]: I0218 12:46:55.237206 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c95e107a-542e-4a31-98f9-aed639f1fc42/nova-cell1-novncproxy-novncproxy/0.log" Feb 18 12:46:55 crc kubenswrapper[4717]: I0218 12:46:55.498536 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-cgnxc_2facfe0c-cdcb-44fb-ab60-197e5cf58fb3/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:46:55 crc kubenswrapper[4717]: I0218 12:46:55.551733 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc/nova-metadata-log/0.log" Feb 18 12:46:55 crc kubenswrapper[4717]: I0218 12:46:55.895393 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e9941a28-9836-48c1-bab2-c55c92861692/nova-scheduler-scheduler/0.log" Feb 18 12:46:55 crc kubenswrapper[4717]: I0218 12:46:55.916046 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6ba708c6-57e5-4406-8773-2a700b0be0fc/mysql-bootstrap/0.log" Feb 18 12:46:56 crc kubenswrapper[4717]: I0218 
12:46:56.109880 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6ba708c6-57e5-4406-8773-2a700b0be0fc/galera/0.log" Feb 18 12:46:56 crc kubenswrapper[4717]: I0218 12:46:56.134133 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6ba708c6-57e5-4406-8773-2a700b0be0fc/mysql-bootstrap/0.log" Feb 18 12:46:56 crc kubenswrapper[4717]: I0218 12:46:56.348203 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9e12df56-53ef-42bc-9f15-2c7a89b391d1/mysql-bootstrap/0.log" Feb 18 12:46:56 crc kubenswrapper[4717]: I0218 12:46:56.516069 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9e12df56-53ef-42bc-9f15-2c7a89b391d1/galera/0.log" Feb 18 12:46:56 crc kubenswrapper[4717]: I0218 12:46:56.542506 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9e12df56-53ef-42bc-9f15-2c7a89b391d1/mysql-bootstrap/0.log" Feb 18 12:46:56 crc kubenswrapper[4717]: I0218 12:46:56.719745 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9b4ef341-6659-4283-81b4-78674dfd9fc8/openstackclient/0.log" Feb 18 12:46:56 crc kubenswrapper[4717]: I0218 12:46:56.763933 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc/nova-metadata-metadata/0.log" Feb 18 12:46:56 crc kubenswrapper[4717]: I0218 12:46:56.852371 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cqvjv_6e3a25d1-3ad3-4ecb-bca6-84643516d734/ovn-controller/0.log" Feb 18 12:46:56 crc kubenswrapper[4717]: I0218 12:46:56.966126 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-htmxg_b720848b-1453-4de9-982e-de66099bb8f7/openstack-network-exporter/0.log" Feb 18 12:46:57 crc kubenswrapper[4717]: I0218 12:46:57.118955 4717 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zzbgg_d145d1aa-1d6c-4285-9670-42d3bb4ea1cd/ovsdb-server-init/0.log" Feb 18 12:46:57 crc kubenswrapper[4717]: I0218 12:46:57.417107 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zzbgg_d145d1aa-1d6c-4285-9670-42d3bb4ea1cd/ovs-vswitchd/0.log" Feb 18 12:46:57 crc kubenswrapper[4717]: I0218 12:46:57.430755 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zzbgg_d145d1aa-1d6c-4285-9670-42d3bb4ea1cd/ovsdb-server-init/0.log" Feb 18 12:46:57 crc kubenswrapper[4717]: I0218 12:46:57.491848 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zzbgg_d145d1aa-1d6c-4285-9670-42d3bb4ea1cd/ovsdb-server/0.log" Feb 18 12:46:57 crc kubenswrapper[4717]: I0218 12:46:57.652010 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_31402202-302e-46a9-b565-d7b8143153d5/openstack-network-exporter/0.log" Feb 18 12:46:57 crc kubenswrapper[4717]: I0218 12:46:57.726945 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-pnbv9_5cdc94da-0dcb-40a3-9800-2655bae295cb/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:46:57 crc kubenswrapper[4717]: I0218 12:46:57.765495 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_31402202-302e-46a9-b565-d7b8143153d5/ovn-northd/0.log" Feb 18 12:46:57 crc kubenswrapper[4717]: I0218 12:46:57.981654 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_77683781-6580-4589-8869-bbaea0d6d8a0/openstack-network-exporter/0.log" Feb 18 12:46:58 crc kubenswrapper[4717]: I0218 12:46:58.083246 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_77683781-6580-4589-8869-bbaea0d6d8a0/ovsdbserver-nb/0.log" Feb 18 12:46:58 crc kubenswrapper[4717]: I0218 
12:46:58.186757 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_41966c80-d352-4f94-b011-1ef922e3250f/openstack-network-exporter/0.log" Feb 18 12:46:58 crc kubenswrapper[4717]: I0218 12:46:58.204253 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_41966c80-d352-4f94-b011-1ef922e3250f/ovsdbserver-sb/0.log" Feb 18 12:46:58 crc kubenswrapper[4717]: I0218 12:46:58.377994 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-765f565894-lj9d4_94b4824f-c89e-4740-ab96-6a36d2f7abb7/placement-api/0.log" Feb 18 12:46:58 crc kubenswrapper[4717]: I0218 12:46:58.542159 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-765f565894-lj9d4_94b4824f-c89e-4740-ab96-6a36d2f7abb7/placement-log/0.log" Feb 18 12:46:58 crc kubenswrapper[4717]: I0218 12:46:58.607216 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1c31df4b-bbb2-4bdf-9c36-db03b261067c/setup-container/0.log" Feb 18 12:46:58 crc kubenswrapper[4717]: I0218 12:46:58.742163 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1c31df4b-bbb2-4bdf-9c36-db03b261067c/rabbitmq/0.log" Feb 18 12:46:58 crc kubenswrapper[4717]: I0218 12:46:58.805221 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1c31df4b-bbb2-4bdf-9c36-db03b261067c/setup-container/0.log" Feb 18 12:46:58 crc kubenswrapper[4717]: I0218 12:46:58.837774 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_57c90bed-ebc6-4053-b92b-1622edda048a/setup-container/0.log" Feb 18 12:46:59 crc kubenswrapper[4717]: I0218 12:46:59.064986 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_57c90bed-ebc6-4053-b92b-1622edda048a/rabbitmq/0.log" Feb 18 12:46:59 crc kubenswrapper[4717]: I0218 12:46:59.073072 4717 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_57c90bed-ebc6-4053-b92b-1622edda048a/setup-container/0.log" Feb 18 12:46:59 crc kubenswrapper[4717]: I0218 12:46:59.192825 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-nslld_0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:46:59 crc kubenswrapper[4717]: I0218 12:46:59.525425 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-tjgb9_51e7bb49-4d7d-44a3-bb44-dcf18ac0f219/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:46:59 crc kubenswrapper[4717]: I0218 12:46:59.619989 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27_74c7ea9f-0f71-44a4-b3cb-8fd20e90f456/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:46:59 crc kubenswrapper[4717]: I0218 12:46:59.758020 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-kjq2f_32a22361-b7f3-4429-a590-edecf026891c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:46:59 crc kubenswrapper[4717]: I0218 12:46:59.890877 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-98rv7_c1782f70-1ae6-42da-98ad-f42f2495b261/ssh-known-hosts-edpm-deployment/0.log" Feb 18 12:47:00 crc kubenswrapper[4717]: I0218 12:47:00.131522 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-586778dd75-mtms6_e21881f2-73fb-4d0f-974c-a74694a2b301/proxy-server/0.log" Feb 18 12:47:00 crc kubenswrapper[4717]: I0218 12:47:00.182141 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-586778dd75-mtms6_e21881f2-73fb-4d0f-974c-a74694a2b301/proxy-httpd/0.log" Feb 18 12:47:00 crc kubenswrapper[4717]: I0218 
12:47:00.261020 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-66d7l_ac3ad8c1-04a7-46f5-9c76-98c92e3c2158/swift-ring-rebalance/0.log" Feb 18 12:47:00 crc kubenswrapper[4717]: I0218 12:47:00.393900 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/account-auditor/0.log" Feb 18 12:47:00 crc kubenswrapper[4717]: I0218 12:47:00.431842 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/account-reaper/0.log" Feb 18 12:47:00 crc kubenswrapper[4717]: I0218 12:47:00.504913 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/account-replicator/0.log" Feb 18 12:47:00 crc kubenswrapper[4717]: I0218 12:47:00.593667 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/account-server/0.log" Feb 18 12:47:00 crc kubenswrapper[4717]: I0218 12:47:00.667448 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/container-auditor/0.log" Feb 18 12:47:00 crc kubenswrapper[4717]: I0218 12:47:00.720862 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/container-replicator/0.log" Feb 18 12:47:00 crc kubenswrapper[4717]: I0218 12:47:00.775783 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/container-server/0.log" Feb 18 12:47:00 crc kubenswrapper[4717]: I0218 12:47:00.802353 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/container-updater/0.log" Feb 18 12:47:00 crc kubenswrapper[4717]: I0218 12:47:00.940081 4717 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/object-auditor/0.log" Feb 18 12:47:00 crc kubenswrapper[4717]: I0218 12:47:00.980699 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/object-expirer/0.log" Feb 18 12:47:00 crc kubenswrapper[4717]: I0218 12:47:00.999746 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/object-server/0.log" Feb 18 12:47:01 crc kubenswrapper[4717]: I0218 12:47:01.023562 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/object-replicator/0.log" Feb 18 12:47:01 crc kubenswrapper[4717]: I0218 12:47:01.205132 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/object-updater/0.log" Feb 18 12:47:01 crc kubenswrapper[4717]: I0218 12:47:01.242564 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/rsync/0.log" Feb 18 12:47:01 crc kubenswrapper[4717]: I0218 12:47:01.249507 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/swift-recon-cron/0.log" Feb 18 12:47:01 crc kubenswrapper[4717]: I0218 12:47:01.499914 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv_95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:47:01 crc kubenswrapper[4717]: I0218 12:47:01.549200 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_00f1b8ee-1760-4308-b796-155234b0a811/tempest-tests-tempest-tests-runner/0.log" Feb 18 12:47:01 crc kubenswrapper[4717]: I0218 12:47:01.732132 4717 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_67131484-bd44-40b0-92da-d06886a8179b/test-operator-logs-container/0.log" Feb 18 12:47:01 crc kubenswrapper[4717]: I0218 12:47:01.760519 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5sft2_2a2cdadf-7168-4073-ac4d-68893d4f61de/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:47:12 crc kubenswrapper[4717]: I0218 12:47:12.264671 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_760991b3-fcd6-4ea6-bc3b-3fad54f0c70c/memcached/0.log" Feb 18 12:47:27 crc kubenswrapper[4717]: I0218 12:47:27.522313 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8_8a937a28-c870-4dde-a3ee-ebb15180d623/util/0.log" Feb 18 12:47:27 crc kubenswrapper[4717]: I0218 12:47:27.740740 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8_8a937a28-c870-4dde-a3ee-ebb15180d623/pull/0.log" Feb 18 12:47:27 crc kubenswrapper[4717]: I0218 12:47:27.752250 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8_8a937a28-c870-4dde-a3ee-ebb15180d623/util/0.log" Feb 18 12:47:27 crc kubenswrapper[4717]: I0218 12:47:27.752809 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8_8a937a28-c870-4dde-a3ee-ebb15180d623/pull/0.log" Feb 18 12:47:27 crc kubenswrapper[4717]: I0218 12:47:27.940458 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8_8a937a28-c870-4dde-a3ee-ebb15180d623/util/0.log" Feb 18 12:47:27 crc kubenswrapper[4717]: 
I0218 12:47:27.971974 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8_8a937a28-c870-4dde-a3ee-ebb15180d623/extract/0.log" Feb 18 12:47:27 crc kubenswrapper[4717]: I0218 12:47:27.972758 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8_8a937a28-c870-4dde-a3ee-ebb15180d623/pull/0.log" Feb 18 12:47:28 crc kubenswrapper[4717]: I0218 12:47:28.464722 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-2n2t2_986ac762-6758-4402-a5c9-849780ff7fab/manager/0.log" Feb 18 12:47:28 crc kubenswrapper[4717]: I0218 12:47:28.858513 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-v6qrx_ba6c36b6-f446-4cc2-b1b0-9ebb6146dfde/manager/0.log" Feb 18 12:47:28 crc kubenswrapper[4717]: I0218 12:47:28.965968 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-9cnsb_f9800e95-aed6-4d9b-9e88-b6a5f303ee16/manager/0.log" Feb 18 12:47:29 crc kubenswrapper[4717]: I0218 12:47:29.290618 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-rvxd2_e82b0608-77fd-4e73-bafb-00a7b43b6299/manager/0.log" Feb 18 12:47:29 crc kubenswrapper[4717]: I0218 12:47:29.620972 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-jhnvm_3503ed6a-e486-404f-8ac3-df63d9d28c2d/manager/0.log" Feb 18 12:47:29 crc kubenswrapper[4717]: I0218 12:47:29.792097 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-ldqdh_b6f768c0-a04d-49f7-a0ae-5ea5ee09c26b/manager/0.log" Feb 18 
12:47:29 crc kubenswrapper[4717]: I0218 12:47:29.968813 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-szzvb_96c16cf0-31b6-4830-b92f-f25b4ce11979/manager/0.log" Feb 18 12:47:30 crc kubenswrapper[4717]: I0218 12:47:30.114275 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-2lmml_8d4a2d32-4724-4580-a542-7552e580ed15/manager/0.log" Feb 18 12:47:30 crc kubenswrapper[4717]: I0218 12:47:30.201725 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-nljkj_927be7f4-3bc1-42c8-917f-8b898bbbc21a/manager/0.log" Feb 18 12:47:30 crc kubenswrapper[4717]: I0218 12:47:30.396636 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-sl49j_a14214f1-4961-4ade-ba45-d48139b6fd0d/manager/0.log" Feb 18 12:47:30 crc kubenswrapper[4717]: I0218 12:47:30.656987 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-52fmp_5d07e1a5-0372-4721-ac7a-66c568e32be1/manager/0.log" Feb 18 12:47:30 crc kubenswrapper[4717]: I0218 12:47:30.890996 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-xdtl8_7c5e0309-c138-4668-bad9-eacff0124d24/manager/0.log" Feb 18 12:47:31 crc kubenswrapper[4717]: I0218 12:47:31.159649 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24_6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe/manager/0.log" Feb 18 12:47:31 crc kubenswrapper[4717]: I0218 12:47:31.605284 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-84d9946dcc-wjrpz_0807bf80-9dc3-48d9-8cbe-748f85b2089f/operator/0.log" Feb 18 12:47:32 crc kubenswrapper[4717]: I0218 12:47:32.041641 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6gp6j_0b3c9586-d52a-4df4-a96f-91773c3bfbfa/registry-server/0.log" Feb 18 12:47:32 crc kubenswrapper[4717]: I0218 12:47:32.330918 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-lx8wj_196844a3-3220-4557-93a1-dc0887bbb53f/manager/0.log" Feb 18 12:47:32 crc kubenswrapper[4717]: I0218 12:47:32.505959 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-n7n5r_88c2fec0-988b-4496-b054-43f965e23324/manager/0.log" Feb 18 12:47:32 crc kubenswrapper[4717]: I0218 12:47:32.534925 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-j9pkf_3b988944-4f1b-4fb3-89ff-b1a0e61853dc/manager/0.log" Feb 18 12:47:32 crc kubenswrapper[4717]: I0218 12:47:32.744745 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-jrzq4_089bc44f-bd8a-45b5-a497-17cfc2d38bee/operator/0.log" Feb 18 12:47:32 crc kubenswrapper[4717]: I0218 12:47:32.927371 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-26mhv_3faac3ae-2788-4a36-8241-09a601267885/manager/0.log" Feb 18 12:47:33 crc kubenswrapper[4717]: I0218 12:47:33.223218 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-cmrvc_e95271e1-5edd-4862-9dd9-e7ad1feb0ed0/manager/0.log" Feb 18 12:47:33 crc kubenswrapper[4717]: I0218 12:47:33.234743 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-9tf7v_e2e22987-3a27-4550-8593-c54e5628e941/manager/0.log" Feb 18 12:47:33 crc kubenswrapper[4717]: I0218 12:47:33.478599 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-h59sk_886d7474-df3b-4777-bd48-d3bf188f7fc9/manager/0.log" Feb 18 12:47:34 crc kubenswrapper[4717]: I0218 12:47:34.014352 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-846fd54586-rqvhv_fa9ea26a-44d8-4c4d-8766-d1c19fa59d70/manager/0.log" Feb 18 12:47:35 crc kubenswrapper[4717]: I0218 12:47:35.971442 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-5lvkl_eca115e0-882d-4173-a714-1883215088b5/manager/0.log" Feb 18 12:47:52 crc kubenswrapper[4717]: I0218 12:47:52.647869 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jmlmn_c53aaac1-4a8c-439e-8d51-60054a95ed11/control-plane-machine-set-operator/0.log" Feb 18 12:47:52 crc kubenswrapper[4717]: I0218 12:47:52.872789 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kzbdk_baa2972a-fc13-4b3b-bf4b-9dceaf35db41/kube-rbac-proxy/0.log" Feb 18 12:47:52 crc kubenswrapper[4717]: I0218 12:47:52.918134 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kzbdk_baa2972a-fc13-4b3b-bf4b-9dceaf35db41/machine-api-operator/0.log" Feb 18 12:48:04 crc kubenswrapper[4717]: I0218 12:48:04.975335 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-mr9jc_d10e332f-4255-4315-bf68-1b479919ed9c/cert-manager-controller/0.log" Feb 18 12:48:05 crc kubenswrapper[4717]: I0218 12:48:05.356635 4717 log.go:25] 
"Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-qx54w_70bc2303-bab2-48bc-a4a3-4c19b86571aa/cert-manager-cainjector/0.log" Feb 18 12:48:05 crc kubenswrapper[4717]: I0218 12:48:05.409758 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-8dvlf_efe0486e-8153-4083-aedf-15085839219b/cert-manager-webhook/0.log" Feb 18 12:48:18 crc kubenswrapper[4717]: I0218 12:48:18.692594 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-z9wv4_f80bcf06-9be6-4c29-9ed7-d575837ff0d6/nmstate-console-plugin/0.log" Feb 18 12:48:18 crc kubenswrapper[4717]: I0218 12:48:18.861118 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-j27f6_58fee9d2-2e42-46e4-b5a2-8b8c80a52424/nmstate-handler/0.log" Feb 18 12:48:19 crc kubenswrapper[4717]: I0218 12:48:19.143231 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-nb8mk_83d51357-d0dc-4297-9449-a066463019f7/kube-rbac-proxy/0.log" Feb 18 12:48:19 crc kubenswrapper[4717]: I0218 12:48:19.286518 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-nb8mk_83d51357-d0dc-4297-9449-a066463019f7/nmstate-metrics/0.log" Feb 18 12:48:19 crc kubenswrapper[4717]: I0218 12:48:19.471946 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-sj6bs_42edbbd9-e0db-4a1f-b9fc-c0987cae7f48/nmstate-operator/0.log" Feb 18 12:48:19 crc kubenswrapper[4717]: I0218 12:48:19.539047 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-4xf48_7f6c194d-4ed9-4ab8-af9c-bb9c44324b0d/nmstate-webhook/0.log" Feb 18 12:48:45 crc kubenswrapper[4717]: I0218 12:48:45.128652 4717 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-2dgkn"] Feb 18 12:48:45 crc kubenswrapper[4717]: E0218 12:48:45.139422 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058c76af-158d-4dce-9ecb-6dbd7352be6a" containerName="container-00" Feb 18 12:48:45 crc kubenswrapper[4717]: I0218 12:48:45.139462 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="058c76af-158d-4dce-9ecb-6dbd7352be6a" containerName="container-00" Feb 18 12:48:45 crc kubenswrapper[4717]: I0218 12:48:45.139959 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="058c76af-158d-4dce-9ecb-6dbd7352be6a" containerName="container-00" Feb 18 12:48:45 crc kubenswrapper[4717]: I0218 12:48:45.155088 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:45 crc kubenswrapper[4717]: I0218 12:48:45.203498 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2dgkn"] Feb 18 12:48:45 crc kubenswrapper[4717]: I0218 12:48:45.224492 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdnh4\" (UniqueName: \"kubernetes.io/projected/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-kube-api-access-pdnh4\") pod \"certified-operators-2dgkn\" (UID: \"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e\") " pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:45 crc kubenswrapper[4717]: I0218 12:48:45.224561 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-catalog-content\") pod \"certified-operators-2dgkn\" (UID: \"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e\") " pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:45 crc kubenswrapper[4717]: I0218 12:48:45.224639 4717 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-utilities\") pod \"certified-operators-2dgkn\" (UID: \"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e\") " pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:45 crc kubenswrapper[4717]: I0218 12:48:45.326903 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdnh4\" (UniqueName: \"kubernetes.io/projected/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-kube-api-access-pdnh4\") pod \"certified-operators-2dgkn\" (UID: \"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e\") " pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:45 crc kubenswrapper[4717]: I0218 12:48:45.326991 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-catalog-content\") pod \"certified-operators-2dgkn\" (UID: \"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e\") " pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:45 crc kubenswrapper[4717]: I0218 12:48:45.327086 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-utilities\") pod \"certified-operators-2dgkn\" (UID: \"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e\") " pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:45 crc kubenswrapper[4717]: I0218 12:48:45.327656 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-catalog-content\") pod \"certified-operators-2dgkn\" (UID: \"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e\") " pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:45 crc kubenswrapper[4717]: I0218 12:48:45.327723 4717 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-utilities\") pod \"certified-operators-2dgkn\" (UID: \"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e\") " pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:45 crc kubenswrapper[4717]: I0218 12:48:45.363011 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdnh4\" (UniqueName: \"kubernetes.io/projected/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-kube-api-access-pdnh4\") pod \"certified-operators-2dgkn\" (UID: \"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e\") " pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:45 crc kubenswrapper[4717]: I0218 12:48:45.501231 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:46 crc kubenswrapper[4717]: I0218 12:48:46.104344 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-fbbnw_060557a2-52b7-4e87-908f-0ea8b0febb4c/kube-rbac-proxy/0.log" Feb 18 12:48:46 crc kubenswrapper[4717]: I0218 12:48:46.212962 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-fbbnw_060557a2-52b7-4e87-908f-0ea8b0febb4c/controller/0.log" Feb 18 12:48:46 crc kubenswrapper[4717]: I0218 12:48:46.350912 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-frr-files/0.log" Feb 18 12:48:46 crc kubenswrapper[4717]: I0218 12:48:46.555119 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-reloader/0.log" Feb 18 12:48:46 crc kubenswrapper[4717]: I0218 12:48:46.584465 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-frr-files/0.log" Feb 18 12:48:46 crc 
kubenswrapper[4717]: I0218 12:48:46.638060 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-metrics/0.log" Feb 18 12:48:46 crc kubenswrapper[4717]: I0218 12:48:46.659591 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-reloader/0.log" Feb 18 12:48:46 crc kubenswrapper[4717]: I0218 12:48:46.913654 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-reloader/0.log" Feb 18 12:48:46 crc kubenswrapper[4717]: I0218 12:48:46.918948 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-frr-files/0.log" Feb 18 12:48:46 crc kubenswrapper[4717]: I0218 12:48:46.929899 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2dgkn"] Feb 18 12:48:46 crc kubenswrapper[4717]: I0218 12:48:46.951288 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-metrics/0.log" Feb 18 12:48:47 crc kubenswrapper[4717]: I0218 12:48:47.193927 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-metrics/0.log" Feb 18 12:48:47 crc kubenswrapper[4717]: I0218 12:48:47.225069 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dgkn" event={"ID":"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e","Type":"ContainerStarted","Data":"0e3d1033a5f4353cbe5127c8c63e690dc8bfcc54a966bb1c6e71ddc32c5afc2f"} Feb 18 12:48:47 crc kubenswrapper[4717]: I0218 12:48:47.415065 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-metrics/0.log" Feb 18 12:48:47 crc kubenswrapper[4717]: 
I0218 12:48:47.439462 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-reloader/0.log" Feb 18 12:48:47 crc kubenswrapper[4717]: I0218 12:48:47.453063 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/controller/0.log" Feb 18 12:48:47 crc kubenswrapper[4717]: I0218 12:48:47.460288 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-frr-files/0.log" Feb 18 12:48:47 crc kubenswrapper[4717]: I0218 12:48:47.614568 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/frr-metrics/0.log" Feb 18 12:48:47 crc kubenswrapper[4717]: I0218 12:48:47.648353 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/kube-rbac-proxy-frr/0.log" Feb 18 12:48:47 crc kubenswrapper[4717]: I0218 12:48:47.704061 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/kube-rbac-proxy/0.log" Feb 18 12:48:47 crc kubenswrapper[4717]: I0218 12:48:47.822068 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/reloader/0.log" Feb 18 12:48:47 crc kubenswrapper[4717]: I0218 12:48:47.942951 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-xr4mg_3c7a04c5-e38e-41bf-9343-b567857783d6/frr-k8s-webhook-server/0.log" Feb 18 12:48:48 crc kubenswrapper[4717]: I0218 12:48:48.116657 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6ccf94b89b-k7n5s_4cc66c29-35b2-4c85-95d0-ad78febc48c8/manager/0.log" Feb 18 12:48:48 crc kubenswrapper[4717]: I0218 
12:48:48.240982 4717 generic.go:334] "Generic (PLEG): container finished" podID="7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e" containerID="e29fdff5923df0d8441bc84a06f89bc5319bbea4feb7d2d29c97485a89b7f0f2" exitCode=0 Feb 18 12:48:48 crc kubenswrapper[4717]: I0218 12:48:48.241034 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dgkn" event={"ID":"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e","Type":"ContainerDied","Data":"e29fdff5923df0d8441bc84a06f89bc5319bbea4feb7d2d29c97485a89b7f0f2"} Feb 18 12:48:48 crc kubenswrapper[4717]: I0218 12:48:48.323143 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-fb5446db6-w9jm7_d5bf9065-9c80-484d-9700-dc484f20a071/webhook-server/0.log" Feb 18 12:48:48 crc kubenswrapper[4717]: I0218 12:48:48.455690 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vfwxr_d668cbc3-c191-43fc-bb6f-64f4b7bdb969/kube-rbac-proxy/0.log" Feb 18 12:48:49 crc kubenswrapper[4717]: I0218 12:48:49.091577 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vfwxr_d668cbc3-c191-43fc-bb6f-64f4b7bdb969/speaker/0.log" Feb 18 12:48:49 crc kubenswrapper[4717]: I0218 12:48:49.189653 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/frr/0.log" Feb 18 12:48:50 crc kubenswrapper[4717]: I0218 12:48:50.264302 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dgkn" event={"ID":"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e","Type":"ContainerStarted","Data":"9696d67a0a4ed85f750c361ec3d19e3ce6adc2a4e9201287805059b1796ac061"} Feb 18 12:48:51 crc kubenswrapper[4717]: I0218 12:48:51.276694 4717 generic.go:334] "Generic (PLEG): container finished" podID="7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e" containerID="9696d67a0a4ed85f750c361ec3d19e3ce6adc2a4e9201287805059b1796ac061" exitCode=0 Feb 18 
12:48:51 crc kubenswrapper[4717]: I0218 12:48:51.276746 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dgkn" event={"ID":"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e","Type":"ContainerDied","Data":"9696d67a0a4ed85f750c361ec3d19e3ce6adc2a4e9201287805059b1796ac061"} Feb 18 12:48:52 crc kubenswrapper[4717]: I0218 12:48:52.287682 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dgkn" event={"ID":"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e","Type":"ContainerStarted","Data":"d3638096a1fdbd8cda82d7eb0a04ec904103f27043260f6a884a74163c566ad8"} Feb 18 12:48:52 crc kubenswrapper[4717]: I0218 12:48:52.316682 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2dgkn" podStartSLOduration=3.826285092 podStartE2EDuration="7.316657659s" podCreationTimestamp="2026-02-18 12:48:45 +0000 UTC" firstStartedPulling="2026-02-18 12:48:48.245457968 +0000 UTC m=+3562.647559284" lastFinishedPulling="2026-02-18 12:48:51.735830535 +0000 UTC m=+3566.137931851" observedRunningTime="2026-02-18 12:48:52.310657693 +0000 UTC m=+3566.712759009" watchObservedRunningTime="2026-02-18 12:48:52.316657659 +0000 UTC m=+3566.718758975" Feb 18 12:48:55 crc kubenswrapper[4717]: I0218 12:48:55.502483 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:55 crc kubenswrapper[4717]: I0218 12:48:55.503106 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:55 crc kubenswrapper[4717]: I0218 12:48:55.551979 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:56 crc kubenswrapper[4717]: I0218 12:48:56.385303 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:56 crc kubenswrapper[4717]: I0218 12:48:56.911601 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2dgkn"] Feb 18 12:48:58 crc kubenswrapper[4717]: I0218 12:48:58.340991 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2dgkn" podUID="7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e" containerName="registry-server" containerID="cri-o://d3638096a1fdbd8cda82d7eb0a04ec904103f27043260f6a884a74163c566ad8" gracePeriod=2 Feb 18 12:48:58 crc kubenswrapper[4717]: I0218 12:48:58.808471 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:58 crc kubenswrapper[4717]: I0218 12:48:58.928086 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-catalog-content\") pod \"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e\" (UID: \"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e\") " Feb 18 12:48:58 crc kubenswrapper[4717]: I0218 12:48:58.928145 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdnh4\" (UniqueName: \"kubernetes.io/projected/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-kube-api-access-pdnh4\") pod \"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e\" (UID: \"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e\") " Feb 18 12:48:58 crc kubenswrapper[4717]: I0218 12:48:58.928196 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-utilities\") pod \"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e\" (UID: \"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e\") " Feb 18 12:48:58 crc kubenswrapper[4717]: I0218 12:48:58.929297 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-utilities" (OuterVolumeSpecName: "utilities") pod "7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e" (UID: "7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:48:58 crc kubenswrapper[4717]: I0218 12:48:58.936671 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-kube-api-access-pdnh4" (OuterVolumeSpecName: "kube-api-access-pdnh4") pod "7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e" (UID: "7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e"). InnerVolumeSpecName "kube-api-access-pdnh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:48:58 crc kubenswrapper[4717]: I0218 12:48:58.986655 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e" (UID: "7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.031408 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.031473 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdnh4\" (UniqueName: \"kubernetes.io/projected/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-kube-api-access-pdnh4\") on node \"crc\" DevicePath \"\"" Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.031495 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.352992 4717 generic.go:334] "Generic (PLEG): container finished" podID="7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e" containerID="d3638096a1fdbd8cda82d7eb0a04ec904103f27043260f6a884a74163c566ad8" exitCode=0 Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.353091 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2dgkn" Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.353094 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dgkn" event={"ID":"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e","Type":"ContainerDied","Data":"d3638096a1fdbd8cda82d7eb0a04ec904103f27043260f6a884a74163c566ad8"} Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.353560 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dgkn" event={"ID":"7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e","Type":"ContainerDied","Data":"0e3d1033a5f4353cbe5127c8c63e690dc8bfcc54a966bb1c6e71ddc32c5afc2f"} Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.353586 4717 scope.go:117] "RemoveContainer" containerID="d3638096a1fdbd8cda82d7eb0a04ec904103f27043260f6a884a74163c566ad8" Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.378616 4717 scope.go:117] "RemoveContainer" containerID="9696d67a0a4ed85f750c361ec3d19e3ce6adc2a4e9201287805059b1796ac061" Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.386648 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2dgkn"] Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.398965 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2dgkn"] Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.415641 4717 scope.go:117] "RemoveContainer" containerID="e29fdff5923df0d8441bc84a06f89bc5319bbea4feb7d2d29c97485a89b7f0f2" Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.459498 4717 scope.go:117] "RemoveContainer" containerID="d3638096a1fdbd8cda82d7eb0a04ec904103f27043260f6a884a74163c566ad8" Feb 18 12:48:59 crc kubenswrapper[4717]: E0218 12:48:59.459905 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d3638096a1fdbd8cda82d7eb0a04ec904103f27043260f6a884a74163c566ad8\": container with ID starting with d3638096a1fdbd8cda82d7eb0a04ec904103f27043260f6a884a74163c566ad8 not found: ID does not exist" containerID="d3638096a1fdbd8cda82d7eb0a04ec904103f27043260f6a884a74163c566ad8" Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.460077 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3638096a1fdbd8cda82d7eb0a04ec904103f27043260f6a884a74163c566ad8"} err="failed to get container status \"d3638096a1fdbd8cda82d7eb0a04ec904103f27043260f6a884a74163c566ad8\": rpc error: code = NotFound desc = could not find container \"d3638096a1fdbd8cda82d7eb0a04ec904103f27043260f6a884a74163c566ad8\": container with ID starting with d3638096a1fdbd8cda82d7eb0a04ec904103f27043260f6a884a74163c566ad8 not found: ID does not exist" Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.460174 4717 scope.go:117] "RemoveContainer" containerID="9696d67a0a4ed85f750c361ec3d19e3ce6adc2a4e9201287805059b1796ac061" Feb 18 12:48:59 crc kubenswrapper[4717]: E0218 12:48:59.460698 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9696d67a0a4ed85f750c361ec3d19e3ce6adc2a4e9201287805059b1796ac061\": container with ID starting with 9696d67a0a4ed85f750c361ec3d19e3ce6adc2a4e9201287805059b1796ac061 not found: ID does not exist" containerID="9696d67a0a4ed85f750c361ec3d19e3ce6adc2a4e9201287805059b1796ac061" Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.461056 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9696d67a0a4ed85f750c361ec3d19e3ce6adc2a4e9201287805059b1796ac061"} err="failed to get container status \"9696d67a0a4ed85f750c361ec3d19e3ce6adc2a4e9201287805059b1796ac061\": rpc error: code = NotFound desc = could not find container \"9696d67a0a4ed85f750c361ec3d19e3ce6adc2a4e9201287805059b1796ac061\": container with ID 
starting with 9696d67a0a4ed85f750c361ec3d19e3ce6adc2a4e9201287805059b1796ac061 not found: ID does not exist" Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.461147 4717 scope.go:117] "RemoveContainer" containerID="e29fdff5923df0d8441bc84a06f89bc5319bbea4feb7d2d29c97485a89b7f0f2" Feb 18 12:48:59 crc kubenswrapper[4717]: E0218 12:48:59.461609 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e29fdff5923df0d8441bc84a06f89bc5319bbea4feb7d2d29c97485a89b7f0f2\": container with ID starting with e29fdff5923df0d8441bc84a06f89bc5319bbea4feb7d2d29c97485a89b7f0f2 not found: ID does not exist" containerID="e29fdff5923df0d8441bc84a06f89bc5319bbea4feb7d2d29c97485a89b7f0f2" Feb 18 12:48:59 crc kubenswrapper[4717]: I0218 12:48:59.461703 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e29fdff5923df0d8441bc84a06f89bc5319bbea4feb7d2d29c97485a89b7f0f2"} err="failed to get container status \"e29fdff5923df0d8441bc84a06f89bc5319bbea4feb7d2d29c97485a89b7f0f2\": rpc error: code = NotFound desc = could not find container \"e29fdff5923df0d8441bc84a06f89bc5319bbea4feb7d2d29c97485a89b7f0f2\": container with ID starting with e29fdff5923df0d8441bc84a06f89bc5319bbea4feb7d2d29c97485a89b7f0f2 not found: ID does not exist" Feb 18 12:49:01 crc kubenswrapper[4717]: I0218 12:49:01.047173 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e" path="/var/lib/kubelet/pods/7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e/volumes" Feb 18 12:49:03 crc kubenswrapper[4717]: I0218 12:49:03.812756 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd_3a791e8a-cfda-4171-8b8b-1828dcae5419/util/0.log" Feb 18 12:49:04 crc kubenswrapper[4717]: I0218 12:49:04.029747 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd_3a791e8a-cfda-4171-8b8b-1828dcae5419/pull/0.log" Feb 18 12:49:04 crc kubenswrapper[4717]: I0218 12:49:04.054740 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd_3a791e8a-cfda-4171-8b8b-1828dcae5419/pull/0.log" Feb 18 12:49:04 crc kubenswrapper[4717]: I0218 12:49:04.084404 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd_3a791e8a-cfda-4171-8b8b-1828dcae5419/util/0.log" Feb 18 12:49:04 crc kubenswrapper[4717]: I0218 12:49:04.240302 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd_3a791e8a-cfda-4171-8b8b-1828dcae5419/pull/0.log" Feb 18 12:49:04 crc kubenswrapper[4717]: I0218 12:49:04.275082 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd_3a791e8a-cfda-4171-8b8b-1828dcae5419/extract/0.log" Feb 18 12:49:04 crc kubenswrapper[4717]: I0218 12:49:04.277249 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd_3a791e8a-cfda-4171-8b8b-1828dcae5419/util/0.log" Feb 18 12:49:04 crc kubenswrapper[4717]: I0218 12:49:04.441570 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzfwz_89193f23-0851-4c72-8fa7-bdefb5b47de9/extract-utilities/0.log" Feb 18 12:49:04 crc kubenswrapper[4717]: I0218 12:49:04.634676 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzfwz_89193f23-0851-4c72-8fa7-bdefb5b47de9/extract-content/0.log" Feb 18 12:49:04 crc kubenswrapper[4717]: I0218 
12:49:04.668991 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzfwz_89193f23-0851-4c72-8fa7-bdefb5b47de9/extract-utilities/0.log" Feb 18 12:49:04 crc kubenswrapper[4717]: I0218 12:49:04.669037 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzfwz_89193f23-0851-4c72-8fa7-bdefb5b47de9/extract-content/0.log" Feb 18 12:49:05 crc kubenswrapper[4717]: I0218 12:49:05.107552 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzfwz_89193f23-0851-4c72-8fa7-bdefb5b47de9/extract-utilities/0.log" Feb 18 12:49:05 crc kubenswrapper[4717]: I0218 12:49:05.137020 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzfwz_89193f23-0851-4c72-8fa7-bdefb5b47de9/extract-content/0.log" Feb 18 12:49:05 crc kubenswrapper[4717]: I0218 12:49:05.418246 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rv6t6_253e019a-02ea-41f5-bf51-52340512ad50/extract-utilities/0.log" Feb 18 12:49:05 crc kubenswrapper[4717]: I0218 12:49:05.603038 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rv6t6_253e019a-02ea-41f5-bf51-52340512ad50/extract-utilities/0.log" Feb 18 12:49:05 crc kubenswrapper[4717]: I0218 12:49:05.690528 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rv6t6_253e019a-02ea-41f5-bf51-52340512ad50/extract-content/0.log" Feb 18 12:49:05 crc kubenswrapper[4717]: I0218 12:49:05.703011 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rv6t6_253e019a-02ea-41f5-bf51-52340512ad50/extract-content/0.log" Feb 18 12:49:05 crc kubenswrapper[4717]: I0218 12:49:05.832686 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-nzfwz_89193f23-0851-4c72-8fa7-bdefb5b47de9/registry-server/0.log" Feb 18 12:49:05 crc kubenswrapper[4717]: I0218 12:49:05.932488 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rv6t6_253e019a-02ea-41f5-bf51-52340512ad50/extract-utilities/0.log" Feb 18 12:49:05 crc kubenswrapper[4717]: I0218 12:49:05.979547 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rv6t6_253e019a-02ea-41f5-bf51-52340512ad50/extract-content/0.log" Feb 18 12:49:06 crc kubenswrapper[4717]: I0218 12:49:06.246519 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd_ce95cdae-125f-4394-8f29-8d718f8297c4/util/0.log" Feb 18 12:49:06 crc kubenswrapper[4717]: I0218 12:49:06.539980 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd_ce95cdae-125f-4394-8f29-8d718f8297c4/pull/0.log" Feb 18 12:49:06 crc kubenswrapper[4717]: I0218 12:49:06.577300 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd_ce95cdae-125f-4394-8f29-8d718f8297c4/util/0.log" Feb 18 12:49:06 crc kubenswrapper[4717]: I0218 12:49:06.582525 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd_ce95cdae-125f-4394-8f29-8d718f8297c4/pull/0.log" Feb 18 12:49:06 crc kubenswrapper[4717]: I0218 12:49:06.743868 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd_ce95cdae-125f-4394-8f29-8d718f8297c4/pull/0.log" Feb 18 12:49:06 crc kubenswrapper[4717]: I0218 12:49:06.778149 4717 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd_ce95cdae-125f-4394-8f29-8d718f8297c4/util/0.log" Feb 18 12:49:06 crc kubenswrapper[4717]: I0218 12:49:06.797543 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rv6t6_253e019a-02ea-41f5-bf51-52340512ad50/registry-server/0.log" Feb 18 12:49:06 crc kubenswrapper[4717]: I0218 12:49:06.815897 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd_ce95cdae-125f-4394-8f29-8d718f8297c4/extract/0.log" Feb 18 12:49:06 crc kubenswrapper[4717]: I0218 12:49:06.975433 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-h4tq9_27aacb4e-b587-400b-a73b-d7d27d3e2bb6/marketplace-operator/0.log" Feb 18 12:49:07 crc kubenswrapper[4717]: I0218 12:49:07.078325 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9qfgx_8986e8c3-7ce9-40ca-94dd-8258ee800dc3/extract-utilities/0.log" Feb 18 12:49:07 crc kubenswrapper[4717]: I0218 12:49:07.265947 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9qfgx_8986e8c3-7ce9-40ca-94dd-8258ee800dc3/extract-utilities/0.log" Feb 18 12:49:07 crc kubenswrapper[4717]: I0218 12:49:07.288411 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9qfgx_8986e8c3-7ce9-40ca-94dd-8258ee800dc3/extract-content/0.log" Feb 18 12:49:07 crc kubenswrapper[4717]: I0218 12:49:07.312504 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9qfgx_8986e8c3-7ce9-40ca-94dd-8258ee800dc3/extract-content/0.log" Feb 18 12:49:07 crc kubenswrapper[4717]: I0218 12:49:07.556611 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-9qfgx_8986e8c3-7ce9-40ca-94dd-8258ee800dc3/extract-utilities/0.log" Feb 18 12:49:07 crc kubenswrapper[4717]: I0218 12:49:07.570540 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9qfgx_8986e8c3-7ce9-40ca-94dd-8258ee800dc3/extract-content/0.log" Feb 18 12:49:07 crc kubenswrapper[4717]: I0218 12:49:07.687106 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9qfgx_8986e8c3-7ce9-40ca-94dd-8258ee800dc3/registry-server/0.log" Feb 18 12:49:07 crc kubenswrapper[4717]: I0218 12:49:07.771357 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gp2r2_6724ff6a-bb09-4f26-a225-c815a47da5fc/extract-utilities/0.log" Feb 18 12:49:07 crc kubenswrapper[4717]: I0218 12:49:07.959472 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gp2r2_6724ff6a-bb09-4f26-a225-c815a47da5fc/extract-utilities/0.log" Feb 18 12:49:07 crc kubenswrapper[4717]: I0218 12:49:07.969593 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gp2r2_6724ff6a-bb09-4f26-a225-c815a47da5fc/extract-content/0.log" Feb 18 12:49:08 crc kubenswrapper[4717]: I0218 12:49:08.004310 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gp2r2_6724ff6a-bb09-4f26-a225-c815a47da5fc/extract-content/0.log" Feb 18 12:49:08 crc kubenswrapper[4717]: I0218 12:49:08.148665 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gp2r2_6724ff6a-bb09-4f26-a225-c815a47da5fc/extract-content/0.log" Feb 18 12:49:08 crc kubenswrapper[4717]: I0218 12:49:08.207173 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gp2r2_6724ff6a-bb09-4f26-a225-c815a47da5fc/extract-utilities/0.log" Feb 
18 12:49:08 crc kubenswrapper[4717]: I0218 12:49:08.370505 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gp2r2_6724ff6a-bb09-4f26-a225-c815a47da5fc/registry-server/0.log" Feb 18 12:49:12 crc kubenswrapper[4717]: I0218 12:49:12.773502 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:49:12 crc kubenswrapper[4717]: I0218 12:49:12.774070 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:49:41 crc kubenswrapper[4717]: E0218 12:49:41.109602 4717 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.201:40880->38.102.83.201:43037: write tcp 38.102.83.201:40880->38.102.83.201:43037: write: broken pipe Feb 18 12:49:42 crc kubenswrapper[4717]: I0218 12:49:42.772968 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:49:42 crc kubenswrapper[4717]: I0218 12:49:42.773324 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:50:12 
crc kubenswrapper[4717]: I0218 12:50:12.772748 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:50:12 crc kubenswrapper[4717]: I0218 12:50:12.773671 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:50:12 crc kubenswrapper[4717]: I0218 12:50:12.773764 4717 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 12:50:12 crc kubenswrapper[4717]: I0218 12:50:12.775408 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"003acb3fe0951f7b09611ee8f5a9d4d0657e29265ad2614b2da8e339d3d0ea1b"} pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:50:12 crc kubenswrapper[4717]: I0218 12:50:12.775517 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" containerID="cri-o://003acb3fe0951f7b09611ee8f5a9d4d0657e29265ad2614b2da8e339d3d0ea1b" gracePeriod=600 Feb 18 12:50:13 crc kubenswrapper[4717]: I0218 12:50:13.058094 4717 generic.go:334] "Generic (PLEG): container finished" podID="823580ef-975b-4298-955b-fb3c0b5fefc3" 
containerID="003acb3fe0951f7b09611ee8f5a9d4d0657e29265ad2614b2da8e339d3d0ea1b" exitCode=0 Feb 18 12:50:13 crc kubenswrapper[4717]: I0218 12:50:13.062974 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerDied","Data":"003acb3fe0951f7b09611ee8f5a9d4d0657e29265ad2614b2da8e339d3d0ea1b"} Feb 18 12:50:13 crc kubenswrapper[4717]: I0218 12:50:13.063050 4717 scope.go:117] "RemoveContainer" containerID="703b0a1a7b6018fb6faea4356186d93d158968b9753c70c35b17368b598f71b0" Feb 18 12:50:14 crc kubenswrapper[4717]: I0218 12:50:14.085752 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a"} Feb 18 12:50:34 crc kubenswrapper[4717]: I0218 12:50:34.415907 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-586778dd75-mtms6" podUID="e21881f2-73fb-4d0f-974c-a74694a2b301" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 18 12:51:04 crc kubenswrapper[4717]: I0218 12:51:04.332250 4717 generic.go:334] "Generic (PLEG): container finished" podID="8babfa5e-44d2-4766-976c-54881af09657" containerID="3d9e32bebbc3aa58a566006906f58f9207b1327705302c0e7f72dca81bc49850" exitCode=0 Feb 18 12:51:04 crc kubenswrapper[4717]: I0218 12:51:04.332366 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4xhlg/must-gather-rclqm" event={"ID":"8babfa5e-44d2-4766-976c-54881af09657","Type":"ContainerDied","Data":"3d9e32bebbc3aa58a566006906f58f9207b1327705302c0e7f72dca81bc49850"} Feb 18 12:51:04 crc kubenswrapper[4717]: I0218 12:51:04.333448 4717 scope.go:117] "RemoveContainer" 
containerID="3d9e32bebbc3aa58a566006906f58f9207b1327705302c0e7f72dca81bc49850" Feb 18 12:51:05 crc kubenswrapper[4717]: I0218 12:51:05.369658 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4xhlg_must-gather-rclqm_8babfa5e-44d2-4766-976c-54881af09657/gather/0.log" Feb 18 12:51:12 crc kubenswrapper[4717]: I0218 12:51:12.927855 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4xhlg/must-gather-rclqm"] Feb 18 12:51:12 crc kubenswrapper[4717]: I0218 12:51:12.928727 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-4xhlg/must-gather-rclqm" podUID="8babfa5e-44d2-4766-976c-54881af09657" containerName="copy" containerID="cri-o://8065a4a587fa12dc19ea37233256de40481bbb546700ba3182eb5cf4c0c3dfba" gracePeriod=2 Feb 18 12:51:12 crc kubenswrapper[4717]: I0218 12:51:12.935857 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4xhlg/must-gather-rclqm"] Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.381408 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4xhlg_must-gather-rclqm_8babfa5e-44d2-4766-976c-54881af09657/copy/0.log" Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.382124 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4xhlg/must-gather-rclqm" Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.418927 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4xhlg_must-gather-rclqm_8babfa5e-44d2-4766-976c-54881af09657/copy/0.log" Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.419359 4717 generic.go:334] "Generic (PLEG): container finished" podID="8babfa5e-44d2-4766-976c-54881af09657" containerID="8065a4a587fa12dc19ea37233256de40481bbb546700ba3182eb5cf4c0c3dfba" exitCode=143 Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.419510 4717 scope.go:117] "RemoveContainer" containerID="8065a4a587fa12dc19ea37233256de40481bbb546700ba3182eb5cf4c0c3dfba" Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.419717 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4xhlg/must-gather-rclqm" Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.452538 4717 scope.go:117] "RemoveContainer" containerID="3d9e32bebbc3aa58a566006906f58f9207b1327705302c0e7f72dca81bc49850" Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.472496 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8babfa5e-44d2-4766-976c-54881af09657-must-gather-output\") pod \"8babfa5e-44d2-4766-976c-54881af09657\" (UID: \"8babfa5e-44d2-4766-976c-54881af09657\") " Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.473075 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcgtf\" (UniqueName: \"kubernetes.io/projected/8babfa5e-44d2-4766-976c-54881af09657-kube-api-access-bcgtf\") pod \"8babfa5e-44d2-4766-976c-54881af09657\" (UID: \"8babfa5e-44d2-4766-976c-54881af09657\") " Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.482882 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/8babfa5e-44d2-4766-976c-54881af09657-kube-api-access-bcgtf" (OuterVolumeSpecName: "kube-api-access-bcgtf") pod "8babfa5e-44d2-4766-976c-54881af09657" (UID: "8babfa5e-44d2-4766-976c-54881af09657"). InnerVolumeSpecName "kube-api-access-bcgtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.561700 4717 scope.go:117] "RemoveContainer" containerID="8065a4a587fa12dc19ea37233256de40481bbb546700ba3182eb5cf4c0c3dfba" Feb 18 12:51:13 crc kubenswrapper[4717]: E0218 12:51:13.562576 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8065a4a587fa12dc19ea37233256de40481bbb546700ba3182eb5cf4c0c3dfba\": container with ID starting with 8065a4a587fa12dc19ea37233256de40481bbb546700ba3182eb5cf4c0c3dfba not found: ID does not exist" containerID="8065a4a587fa12dc19ea37233256de40481bbb546700ba3182eb5cf4c0c3dfba" Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.562628 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8065a4a587fa12dc19ea37233256de40481bbb546700ba3182eb5cf4c0c3dfba"} err="failed to get container status \"8065a4a587fa12dc19ea37233256de40481bbb546700ba3182eb5cf4c0c3dfba\": rpc error: code = NotFound desc = could not find container \"8065a4a587fa12dc19ea37233256de40481bbb546700ba3182eb5cf4c0c3dfba\": container with ID starting with 8065a4a587fa12dc19ea37233256de40481bbb546700ba3182eb5cf4c0c3dfba not found: ID does not exist" Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.562678 4717 scope.go:117] "RemoveContainer" containerID="3d9e32bebbc3aa58a566006906f58f9207b1327705302c0e7f72dca81bc49850" Feb 18 12:51:13 crc kubenswrapper[4717]: E0218 12:51:13.563206 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d9e32bebbc3aa58a566006906f58f9207b1327705302c0e7f72dca81bc49850\": 
container with ID starting with 3d9e32bebbc3aa58a566006906f58f9207b1327705302c0e7f72dca81bc49850 not found: ID does not exist" containerID="3d9e32bebbc3aa58a566006906f58f9207b1327705302c0e7f72dca81bc49850" Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.563404 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d9e32bebbc3aa58a566006906f58f9207b1327705302c0e7f72dca81bc49850"} err="failed to get container status \"3d9e32bebbc3aa58a566006906f58f9207b1327705302c0e7f72dca81bc49850\": rpc error: code = NotFound desc = could not find container \"3d9e32bebbc3aa58a566006906f58f9207b1327705302c0e7f72dca81bc49850\": container with ID starting with 3d9e32bebbc3aa58a566006906f58f9207b1327705302c0e7f72dca81bc49850 not found: ID does not exist" Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.582061 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcgtf\" (UniqueName: \"kubernetes.io/projected/8babfa5e-44d2-4766-976c-54881af09657-kube-api-access-bcgtf\") on node \"crc\" DevicePath \"\"" Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.647045 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8babfa5e-44d2-4766-976c-54881af09657-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8babfa5e-44d2-4766-976c-54881af09657" (UID: "8babfa5e-44d2-4766-976c-54881af09657"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:51:13 crc kubenswrapper[4717]: I0218 12:51:13.684284 4717 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8babfa5e-44d2-4766-976c-54881af09657-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 18 12:51:15 crc kubenswrapper[4717]: I0218 12:51:15.053348 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8babfa5e-44d2-4766-976c-54881af09657" path="/var/lib/kubelet/pods/8babfa5e-44d2-4766-976c-54881af09657/volumes" Feb 18 12:51:43 crc kubenswrapper[4717]: I0218 12:51:43.600513 4717 scope.go:117] "RemoveContainer" containerID="5d17df50486739edc52921fc6cd536bde9698b1401b485fbf4e1283df08184c2" Feb 18 12:51:46 crc kubenswrapper[4717]: I0218 12:51:46.038547 4717 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-j27f6" podUID="58fee9d2-2e42-46e4-b5a2-8b8c80a52424" containerName="nmstate-handler" probeResult="failure" output="command timed out" Feb 18 12:52:42 crc kubenswrapper[4717]: I0218 12:52:42.773307 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:52:42 crc kubenswrapper[4717]: I0218 12:52:42.774055 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:53:12 crc kubenswrapper[4717]: I0218 12:53:12.774965 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:53:12 crc kubenswrapper[4717]: I0218 12:53:12.775741 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:53:42 crc kubenswrapper[4717]: I0218 12:53:42.773559 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 12:53:42 crc kubenswrapper[4717]: I0218 12:53:42.774372 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 12:53:42 crc kubenswrapper[4717]: I0218 12:53:42.774442 4717 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 12:53:42 crc kubenswrapper[4717]: I0218 12:53:42.775445 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a"} pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 12:53:42 crc 
kubenswrapper[4717]: I0218 12:53:42.775522 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" containerID="cri-o://a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" gracePeriod=600 Feb 18 12:53:42 crc kubenswrapper[4717]: E0218 12:53:42.904327 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:53:43 crc kubenswrapper[4717]: I0218 12:53:43.911160 4717 generic.go:334] "Generic (PLEG): container finished" podID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" exitCode=0 Feb 18 12:53:43 crc kubenswrapper[4717]: I0218 12:53:43.911230 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerDied","Data":"a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a"} Feb 18 12:53:43 crc kubenswrapper[4717]: I0218 12:53:43.911989 4717 scope.go:117] "RemoveContainer" containerID="003acb3fe0951f7b09611ee8f5a9d4d0657e29265ad2614b2da8e339d3d0ea1b" Feb 18 12:53:43 crc kubenswrapper[4717]: I0218 12:53:43.912584 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:53:43 crc kubenswrapper[4717]: E0218 12:53:43.912923 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:53:56 crc kubenswrapper[4717]: I0218 12:53:56.036862 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:53:56 crc kubenswrapper[4717]: E0218 12:53:56.039167 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:54:07 crc kubenswrapper[4717]: I0218 12:54:07.047488 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:54:07 crc kubenswrapper[4717]: E0218 12:54:07.048741 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:54:19 crc kubenswrapper[4717]: I0218 12:54:19.041403 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:54:19 crc kubenswrapper[4717]: E0218 12:54:19.042158 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.330498 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vsg65/must-gather-5j6qp"] Feb 18 12:54:22 crc kubenswrapper[4717]: E0218 12:54:22.331510 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e" containerName="extract-content" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.331528 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e" containerName="extract-content" Feb 18 12:54:22 crc kubenswrapper[4717]: E0218 12:54:22.331568 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e" containerName="registry-server" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.331577 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e" containerName="registry-server" Feb 18 12:54:22 crc kubenswrapper[4717]: E0218 12:54:22.331595 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8babfa5e-44d2-4766-976c-54881af09657" containerName="copy" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.331605 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="8babfa5e-44d2-4766-976c-54881af09657" containerName="copy" Feb 18 12:54:22 crc kubenswrapper[4717]: E0218 12:54:22.331629 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e" containerName="extract-utilities" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.331638 4717 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e" containerName="extract-utilities" Feb 18 12:54:22 crc kubenswrapper[4717]: E0218 12:54:22.331667 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8babfa5e-44d2-4766-976c-54881af09657" containerName="gather" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.331675 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="8babfa5e-44d2-4766-976c-54881af09657" containerName="gather" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.331952 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="8babfa5e-44d2-4766-976c-54881af09657" containerName="gather" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.331990 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ad45c28-db04-4b55-aa0a-35ad2d8f4d1e" containerName="registry-server" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.332008 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="8babfa5e-44d2-4766-976c-54881af09657" containerName="copy" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.333617 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vsg65/must-gather-5j6qp" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.336149 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vsg65"/"kube-root-ca.crt" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.336439 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vsg65"/"default-dockercfg-sstvv" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.336650 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vsg65"/"openshift-service-ca.crt" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.356343 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vsg65/must-gather-5j6qp"] Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.372999 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxfdz\" (UniqueName: \"kubernetes.io/projected/da82de6a-fa31-4823-ba62-4211efc7efe1-kube-api-access-xxfdz\") pod \"must-gather-5j6qp\" (UID: \"da82de6a-fa31-4823-ba62-4211efc7efe1\") " pod="openshift-must-gather-vsg65/must-gather-5j6qp" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.373068 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/da82de6a-fa31-4823-ba62-4211efc7efe1-must-gather-output\") pod \"must-gather-5j6qp\" (UID: \"da82de6a-fa31-4823-ba62-4211efc7efe1\") " pod="openshift-must-gather-vsg65/must-gather-5j6qp" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.475120 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxfdz\" (UniqueName: \"kubernetes.io/projected/da82de6a-fa31-4823-ba62-4211efc7efe1-kube-api-access-xxfdz\") pod \"must-gather-5j6qp\" (UID: \"da82de6a-fa31-4823-ba62-4211efc7efe1\") " 
pod="openshift-must-gather-vsg65/must-gather-5j6qp" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.475462 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/da82de6a-fa31-4823-ba62-4211efc7efe1-must-gather-output\") pod \"must-gather-5j6qp\" (UID: \"da82de6a-fa31-4823-ba62-4211efc7efe1\") " pod="openshift-must-gather-vsg65/must-gather-5j6qp" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.475878 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/da82de6a-fa31-4823-ba62-4211efc7efe1-must-gather-output\") pod \"must-gather-5j6qp\" (UID: \"da82de6a-fa31-4823-ba62-4211efc7efe1\") " pod="openshift-must-gather-vsg65/must-gather-5j6qp" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.495431 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxfdz\" (UniqueName: \"kubernetes.io/projected/da82de6a-fa31-4823-ba62-4211efc7efe1-kube-api-access-xxfdz\") pod \"must-gather-5j6qp\" (UID: \"da82de6a-fa31-4823-ba62-4211efc7efe1\") " pod="openshift-must-gather-vsg65/must-gather-5j6qp" Feb 18 12:54:22 crc kubenswrapper[4717]: I0218 12:54:22.666214 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vsg65/must-gather-5j6qp" Feb 18 12:54:23 crc kubenswrapper[4717]: I0218 12:54:23.208335 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vsg65/must-gather-5j6qp"] Feb 18 12:54:23 crc kubenswrapper[4717]: I0218 12:54:23.389720 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vsg65/must-gather-5j6qp" event={"ID":"da82de6a-fa31-4823-ba62-4211efc7efe1","Type":"ContainerStarted","Data":"b36696880d86a38ac67191a7587640cd1b6b9bbe78779e81df153e1459794a9e"} Feb 18 12:54:24 crc kubenswrapper[4717]: I0218 12:54:24.401191 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vsg65/must-gather-5j6qp" event={"ID":"da82de6a-fa31-4823-ba62-4211efc7efe1","Type":"ContainerStarted","Data":"8dde4249c34dcc7318277941bd1174a7a1e1a47987c9d0fc33372bc665d5faa3"} Feb 18 12:54:24 crc kubenswrapper[4717]: I0218 12:54:24.401637 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vsg65/must-gather-5j6qp" event={"ID":"da82de6a-fa31-4823-ba62-4211efc7efe1","Type":"ContainerStarted","Data":"988d5f33b9e8fdbca47bc5c736f222e682f08211dd169fcf0a086898eae0425e"} Feb 18 12:54:24 crc kubenswrapper[4717]: I0218 12:54:24.423178 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vsg65/must-gather-5j6qp" podStartSLOduration=2.423157023 podStartE2EDuration="2.423157023s" podCreationTimestamp="2026-02-18 12:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:54:24.417562352 +0000 UTC m=+3898.819663678" watchObservedRunningTime="2026-02-18 12:54:24.423157023 +0000 UTC m=+3898.825258339" Feb 18 12:54:27 crc kubenswrapper[4717]: I0218 12:54:27.360148 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vsg65/crc-debug-grnjk"] Feb 18 12:54:27 crc kubenswrapper[4717]: 
I0218 12:54:27.361895 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vsg65/crc-debug-grnjk" Feb 18 12:54:27 crc kubenswrapper[4717]: I0218 12:54:27.385791 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzh6n\" (UniqueName: \"kubernetes.io/projected/a0891f11-fdc5-49fa-992f-7f441bd4ae36-kube-api-access-nzh6n\") pod \"crc-debug-grnjk\" (UID: \"a0891f11-fdc5-49fa-992f-7f441bd4ae36\") " pod="openshift-must-gather-vsg65/crc-debug-grnjk" Feb 18 12:54:27 crc kubenswrapper[4717]: I0218 12:54:27.385967 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0891f11-fdc5-49fa-992f-7f441bd4ae36-host\") pod \"crc-debug-grnjk\" (UID: \"a0891f11-fdc5-49fa-992f-7f441bd4ae36\") " pod="openshift-must-gather-vsg65/crc-debug-grnjk" Feb 18 12:54:27 crc kubenswrapper[4717]: I0218 12:54:27.488733 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzh6n\" (UniqueName: \"kubernetes.io/projected/a0891f11-fdc5-49fa-992f-7f441bd4ae36-kube-api-access-nzh6n\") pod \"crc-debug-grnjk\" (UID: \"a0891f11-fdc5-49fa-992f-7f441bd4ae36\") " pod="openshift-must-gather-vsg65/crc-debug-grnjk" Feb 18 12:54:27 crc kubenswrapper[4717]: I0218 12:54:27.488898 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0891f11-fdc5-49fa-992f-7f441bd4ae36-host\") pod \"crc-debug-grnjk\" (UID: \"a0891f11-fdc5-49fa-992f-7f441bd4ae36\") " pod="openshift-must-gather-vsg65/crc-debug-grnjk" Feb 18 12:54:27 crc kubenswrapper[4717]: I0218 12:54:27.489084 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0891f11-fdc5-49fa-992f-7f441bd4ae36-host\") pod \"crc-debug-grnjk\" (UID: \"a0891f11-fdc5-49fa-992f-7f441bd4ae36\") 
" pod="openshift-must-gather-vsg65/crc-debug-grnjk" Feb 18 12:54:27 crc kubenswrapper[4717]: I0218 12:54:27.508866 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzh6n\" (UniqueName: \"kubernetes.io/projected/a0891f11-fdc5-49fa-992f-7f441bd4ae36-kube-api-access-nzh6n\") pod \"crc-debug-grnjk\" (UID: \"a0891f11-fdc5-49fa-992f-7f441bd4ae36\") " pod="openshift-must-gather-vsg65/crc-debug-grnjk" Feb 18 12:54:27 crc kubenswrapper[4717]: I0218 12:54:27.679833 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vsg65/crc-debug-grnjk" Feb 18 12:54:28 crc kubenswrapper[4717]: I0218 12:54:28.438593 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vsg65/crc-debug-grnjk" event={"ID":"a0891f11-fdc5-49fa-992f-7f441bd4ae36","Type":"ContainerStarted","Data":"4b98a3e31c9f6ed9b49639fc06430d0dad3220baa0d7147cc443aefd03d0fe75"} Feb 18 12:54:28 crc kubenswrapper[4717]: I0218 12:54:28.439230 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vsg65/crc-debug-grnjk" event={"ID":"a0891f11-fdc5-49fa-992f-7f441bd4ae36","Type":"ContainerStarted","Data":"fbb4cf05a2d00d56bb973aa8680640b0b6c5f1025889bf173ad364ce64b1f3c1"} Feb 18 12:54:28 crc kubenswrapper[4717]: I0218 12:54:28.457830 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vsg65/crc-debug-grnjk" podStartSLOduration=1.457806801 podStartE2EDuration="1.457806801s" podCreationTimestamp="2026-02-18 12:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 12:54:28.451996164 +0000 UTC m=+3902.854097480" watchObservedRunningTime="2026-02-18 12:54:28.457806801 +0000 UTC m=+3902.859908117" Feb 18 12:54:34 crc kubenswrapper[4717]: I0218 12:54:34.036686 4717 scope.go:117] "RemoveContainer" 
containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:54:34 crc kubenswrapper[4717]: E0218 12:54:34.037484 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:54:49 crc kubenswrapper[4717]: I0218 12:54:49.037135 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:54:49 crc kubenswrapper[4717]: E0218 12:54:49.038060 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:55:01 crc kubenswrapper[4717]: I0218 12:55:01.744807 4717 generic.go:334] "Generic (PLEG): container finished" podID="a0891f11-fdc5-49fa-992f-7f441bd4ae36" containerID="4b98a3e31c9f6ed9b49639fc06430d0dad3220baa0d7147cc443aefd03d0fe75" exitCode=0 Feb 18 12:55:01 crc kubenswrapper[4717]: I0218 12:55:01.744911 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vsg65/crc-debug-grnjk" event={"ID":"a0891f11-fdc5-49fa-992f-7f441bd4ae36","Type":"ContainerDied","Data":"4b98a3e31c9f6ed9b49639fc06430d0dad3220baa0d7147cc443aefd03d0fe75"} Feb 18 12:55:02 crc kubenswrapper[4717]: I0218 12:55:02.860731 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vsg65/crc-debug-grnjk" Feb 18 12:55:02 crc kubenswrapper[4717]: I0218 12:55:02.898178 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vsg65/crc-debug-grnjk"] Feb 18 12:55:02 crc kubenswrapper[4717]: I0218 12:55:02.909391 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vsg65/crc-debug-grnjk"] Feb 18 12:55:02 crc kubenswrapper[4717]: I0218 12:55:02.939460 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0891f11-fdc5-49fa-992f-7f441bd4ae36-host\") pod \"a0891f11-fdc5-49fa-992f-7f441bd4ae36\" (UID: \"a0891f11-fdc5-49fa-992f-7f441bd4ae36\") " Feb 18 12:55:02 crc kubenswrapper[4717]: I0218 12:55:02.939563 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzh6n\" (UniqueName: \"kubernetes.io/projected/a0891f11-fdc5-49fa-992f-7f441bd4ae36-kube-api-access-nzh6n\") pod \"a0891f11-fdc5-49fa-992f-7f441bd4ae36\" (UID: \"a0891f11-fdc5-49fa-992f-7f441bd4ae36\") " Feb 18 12:55:02 crc kubenswrapper[4717]: I0218 12:55:02.939595 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0891f11-fdc5-49fa-992f-7f441bd4ae36-host" (OuterVolumeSpecName: "host") pod "a0891f11-fdc5-49fa-992f-7f441bd4ae36" (UID: "a0891f11-fdc5-49fa-992f-7f441bd4ae36"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 12:55:02 crc kubenswrapper[4717]: I0218 12:55:02.940273 4717 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0891f11-fdc5-49fa-992f-7f441bd4ae36-host\") on node \"crc\" DevicePath \"\"" Feb 18 12:55:02 crc kubenswrapper[4717]: I0218 12:55:02.965119 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0891f11-fdc5-49fa-992f-7f441bd4ae36-kube-api-access-nzh6n" (OuterVolumeSpecName: "kube-api-access-nzh6n") pod "a0891f11-fdc5-49fa-992f-7f441bd4ae36" (UID: "a0891f11-fdc5-49fa-992f-7f441bd4ae36"). InnerVolumeSpecName "kube-api-access-nzh6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:55:03 crc kubenswrapper[4717]: I0218 12:55:03.037356 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:55:03 crc kubenswrapper[4717]: E0218 12:55:03.037604 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:55:03 crc kubenswrapper[4717]: I0218 12:55:03.042490 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzh6n\" (UniqueName: \"kubernetes.io/projected/a0891f11-fdc5-49fa-992f-7f441bd4ae36-kube-api-access-nzh6n\") on node \"crc\" DevicePath \"\"" Feb 18 12:55:03 crc kubenswrapper[4717]: I0218 12:55:03.048978 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0891f11-fdc5-49fa-992f-7f441bd4ae36" path="/var/lib/kubelet/pods/a0891f11-fdc5-49fa-992f-7f441bd4ae36/volumes" Feb 18 12:55:03 crc 
kubenswrapper[4717]: I0218 12:55:03.764892 4717 scope.go:117] "RemoveContainer" containerID="4b98a3e31c9f6ed9b49639fc06430d0dad3220baa0d7147cc443aefd03d0fe75" Feb 18 12:55:03 crc kubenswrapper[4717]: I0218 12:55:03.764918 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vsg65/crc-debug-grnjk" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.100466 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vsg65/crc-debug-t55mf"] Feb 18 12:55:04 crc kubenswrapper[4717]: E0218 12:55:04.102515 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0891f11-fdc5-49fa-992f-7f441bd4ae36" containerName="container-00" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.102613 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0891f11-fdc5-49fa-992f-7f441bd4ae36" containerName="container-00" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.103361 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0891f11-fdc5-49fa-992f-7f441bd4ae36" containerName="container-00" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.104281 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vsg65/crc-debug-t55mf" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.164059 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/90142745-f8cb-4e62-bfa2-8826836f7ceb-host\") pod \"crc-debug-t55mf\" (UID: \"90142745-f8cb-4e62-bfa2-8826836f7ceb\") " pod="openshift-must-gather-vsg65/crc-debug-t55mf" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.164112 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjsr7\" (UniqueName: \"kubernetes.io/projected/90142745-f8cb-4e62-bfa2-8826836f7ceb-kube-api-access-cjsr7\") pod \"crc-debug-t55mf\" (UID: \"90142745-f8cb-4e62-bfa2-8826836f7ceb\") " pod="openshift-must-gather-vsg65/crc-debug-t55mf" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.266305 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/90142745-f8cb-4e62-bfa2-8826836f7ceb-host\") pod \"crc-debug-t55mf\" (UID: \"90142745-f8cb-4e62-bfa2-8826836f7ceb\") " pod="openshift-must-gather-vsg65/crc-debug-t55mf" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.266356 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjsr7\" (UniqueName: \"kubernetes.io/projected/90142745-f8cb-4e62-bfa2-8826836f7ceb-kube-api-access-cjsr7\") pod \"crc-debug-t55mf\" (UID: \"90142745-f8cb-4e62-bfa2-8826836f7ceb\") " pod="openshift-must-gather-vsg65/crc-debug-t55mf" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.266453 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/90142745-f8cb-4e62-bfa2-8826836f7ceb-host\") pod \"crc-debug-t55mf\" (UID: \"90142745-f8cb-4e62-bfa2-8826836f7ceb\") " pod="openshift-must-gather-vsg65/crc-debug-t55mf" Feb 18 12:55:04 crc 
kubenswrapper[4717]: I0218 12:55:04.287183 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjsr7\" (UniqueName: \"kubernetes.io/projected/90142745-f8cb-4e62-bfa2-8826836f7ceb-kube-api-access-cjsr7\") pod \"crc-debug-t55mf\" (UID: \"90142745-f8cb-4e62-bfa2-8826836f7ceb\") " pod="openshift-must-gather-vsg65/crc-debug-t55mf" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.421415 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vsg65/crc-debug-t55mf" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.730223 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x9vh6"] Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.733079 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.744008 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x9vh6"] Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.778627 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fzzq\" (UniqueName: \"kubernetes.io/projected/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-kube-api-access-4fzzq\") pod \"community-operators-x9vh6\" (UID: \"d6ee87a2-4d9d-48c2-aadf-a3339a66c717\") " pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.778678 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-catalog-content\") pod \"community-operators-x9vh6\" (UID: \"d6ee87a2-4d9d-48c2-aadf-a3339a66c717\") " pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.778824 
4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-utilities\") pod \"community-operators-x9vh6\" (UID: \"d6ee87a2-4d9d-48c2-aadf-a3339a66c717\") " pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.787624 4717 generic.go:334] "Generic (PLEG): container finished" podID="90142745-f8cb-4e62-bfa2-8826836f7ceb" containerID="94c6f94e24af1ab125c42c98503f671927a5aa33bb34f924fcbc207d2e9f08ef" exitCode=0 Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.787691 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vsg65/crc-debug-t55mf" event={"ID":"90142745-f8cb-4e62-bfa2-8826836f7ceb","Type":"ContainerDied","Data":"94c6f94e24af1ab125c42c98503f671927a5aa33bb34f924fcbc207d2e9f08ef"} Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.787718 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vsg65/crc-debug-t55mf" event={"ID":"90142745-f8cb-4e62-bfa2-8826836f7ceb","Type":"ContainerStarted","Data":"4a62ec500c3af2f2bb17762b1b2257c404e771f482b7cf78d2a754fb7d7a3750"} Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.880160 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-utilities\") pod \"community-operators-x9vh6\" (UID: \"d6ee87a2-4d9d-48c2-aadf-a3339a66c717\") " pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.880324 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fzzq\" (UniqueName: \"kubernetes.io/projected/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-kube-api-access-4fzzq\") pod \"community-operators-x9vh6\" (UID: \"d6ee87a2-4d9d-48c2-aadf-a3339a66c717\") " 
pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.880373 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-catalog-content\") pod \"community-operators-x9vh6\" (UID: \"d6ee87a2-4d9d-48c2-aadf-a3339a66c717\") " pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.880636 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-utilities\") pod \"community-operators-x9vh6\" (UID: \"d6ee87a2-4d9d-48c2-aadf-a3339a66c717\") " pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.880873 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-catalog-content\") pod \"community-operators-x9vh6\" (UID: \"d6ee87a2-4d9d-48c2-aadf-a3339a66c717\") " pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:04 crc kubenswrapper[4717]: I0218 12:55:04.906732 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fzzq\" (UniqueName: \"kubernetes.io/projected/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-kube-api-access-4fzzq\") pod \"community-operators-x9vh6\" (UID: \"d6ee87a2-4d9d-48c2-aadf-a3339a66c717\") " pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:05 crc kubenswrapper[4717]: I0218 12:55:05.119497 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:05 crc kubenswrapper[4717]: I0218 12:55:05.288096 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vsg65/crc-debug-t55mf"] Feb 18 12:55:05 crc kubenswrapper[4717]: I0218 12:55:05.329652 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vsg65/crc-debug-t55mf"] Feb 18 12:55:05 crc kubenswrapper[4717]: W0218 12:55:05.767414 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6ee87a2_4d9d_48c2_aadf_a3339a66c717.slice/crio-f567552c950c12a4b554242626df13447c9274f792d83e72e1c7f546fa988557 WatchSource:0}: Error finding container f567552c950c12a4b554242626df13447c9274f792d83e72e1c7f546fa988557: Status 404 returned error can't find the container with id f567552c950c12a4b554242626df13447c9274f792d83e72e1c7f546fa988557 Feb 18 12:55:05 crc kubenswrapper[4717]: I0218 12:55:05.772818 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x9vh6"] Feb 18 12:55:05 crc kubenswrapper[4717]: I0218 12:55:05.803005 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9vh6" event={"ID":"d6ee87a2-4d9d-48c2-aadf-a3339a66c717","Type":"ContainerStarted","Data":"f567552c950c12a4b554242626df13447c9274f792d83e72e1c7f546fa988557"} Feb 18 12:55:05 crc kubenswrapper[4717]: I0218 12:55:05.973849 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vsg65/crc-debug-t55mf" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.109726 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/90142745-f8cb-4e62-bfa2-8826836f7ceb-host\") pod \"90142745-f8cb-4e62-bfa2-8826836f7ceb\" (UID: \"90142745-f8cb-4e62-bfa2-8826836f7ceb\") " Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.109844 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjsr7\" (UniqueName: \"kubernetes.io/projected/90142745-f8cb-4e62-bfa2-8826836f7ceb-kube-api-access-cjsr7\") pod \"90142745-f8cb-4e62-bfa2-8826836f7ceb\" (UID: \"90142745-f8cb-4e62-bfa2-8826836f7ceb\") " Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.110486 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90142745-f8cb-4e62-bfa2-8826836f7ceb-host" (OuterVolumeSpecName: "host") pod "90142745-f8cb-4e62-bfa2-8826836f7ceb" (UID: "90142745-f8cb-4e62-bfa2-8826836f7ceb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.110887 4717 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/90142745-f8cb-4e62-bfa2-8826836f7ceb-host\") on node \"crc\" DevicePath \"\"" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.119175 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90142745-f8cb-4e62-bfa2-8826836f7ceb-kube-api-access-cjsr7" (OuterVolumeSpecName: "kube-api-access-cjsr7") pod "90142745-f8cb-4e62-bfa2-8826836f7ceb" (UID: "90142745-f8cb-4e62-bfa2-8826836f7ceb"). InnerVolumeSpecName "kube-api-access-cjsr7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.212648 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjsr7\" (UniqueName: \"kubernetes.io/projected/90142745-f8cb-4e62-bfa2-8826836f7ceb-kube-api-access-cjsr7\") on node \"crc\" DevicePath \"\"" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.631039 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vsg65/crc-debug-h2rk6"] Feb 18 12:55:06 crc kubenswrapper[4717]: E0218 12:55:06.632269 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90142745-f8cb-4e62-bfa2-8826836f7ceb" containerName="container-00" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.632353 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="90142745-f8cb-4e62-bfa2-8826836f7ceb" containerName="container-00" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.633707 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="90142745-f8cb-4e62-bfa2-8826836f7ceb" containerName="container-00" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.634555 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vsg65/crc-debug-h2rk6" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.721180 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ded89430-b756-49dd-acf9-d1ed6c6c502f-host\") pod \"crc-debug-h2rk6\" (UID: \"ded89430-b756-49dd-acf9-d1ed6c6c502f\") " pod="openshift-must-gather-vsg65/crc-debug-h2rk6" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.721588 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4c86\" (UniqueName: \"kubernetes.io/projected/ded89430-b756-49dd-acf9-d1ed6c6c502f-kube-api-access-m4c86\") pod \"crc-debug-h2rk6\" (UID: \"ded89430-b756-49dd-acf9-d1ed6c6c502f\") " pod="openshift-must-gather-vsg65/crc-debug-h2rk6" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.818006 4717 generic.go:334] "Generic (PLEG): container finished" podID="d6ee87a2-4d9d-48c2-aadf-a3339a66c717" containerID="cc989a4a104fa2ff09e3562853bdff54ed404d9c6e9c9c79f6d5e88f2ba72307" exitCode=0 Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.818152 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9vh6" event={"ID":"d6ee87a2-4d9d-48c2-aadf-a3339a66c717","Type":"ContainerDied","Data":"cc989a4a104fa2ff09e3562853bdff54ed404d9c6e9c9c79f6d5e88f2ba72307"} Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.821084 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a62ec500c3af2f2bb17762b1b2257c404e771f482b7cf78d2a754fb7d7a3750" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.821136 4717 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.821167 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vsg65/crc-debug-t55mf" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.823045 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4c86\" (UniqueName: \"kubernetes.io/projected/ded89430-b756-49dd-acf9-d1ed6c6c502f-kube-api-access-m4c86\") pod \"crc-debug-h2rk6\" (UID: \"ded89430-b756-49dd-acf9-d1ed6c6c502f\") " pod="openshift-must-gather-vsg65/crc-debug-h2rk6" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.823318 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ded89430-b756-49dd-acf9-d1ed6c6c502f-host\") pod \"crc-debug-h2rk6\" (UID: \"ded89430-b756-49dd-acf9-d1ed6c6c502f\") " pod="openshift-must-gather-vsg65/crc-debug-h2rk6" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.823558 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ded89430-b756-49dd-acf9-d1ed6c6c502f-host\") pod \"crc-debug-h2rk6\" (UID: \"ded89430-b756-49dd-acf9-d1ed6c6c502f\") " pod="openshift-must-gather-vsg65/crc-debug-h2rk6" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.850480 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4c86\" (UniqueName: \"kubernetes.io/projected/ded89430-b756-49dd-acf9-d1ed6c6c502f-kube-api-access-m4c86\") pod \"crc-debug-h2rk6\" (UID: \"ded89430-b756-49dd-acf9-d1ed6c6c502f\") " pod="openshift-must-gather-vsg65/crc-debug-h2rk6" Feb 18 12:55:06 crc kubenswrapper[4717]: I0218 12:55:06.961770 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vsg65/crc-debug-h2rk6" Feb 18 12:55:07 crc kubenswrapper[4717]: W0218 12:55:07.003177 4717 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podded89430_b756_49dd_acf9_d1ed6c6c502f.slice/crio-72b280f03e111ac51e653845a6a8b55b86c6addfa37035ca5821b3b147698b9b WatchSource:0}: Error finding container 72b280f03e111ac51e653845a6a8b55b86c6addfa37035ca5821b3b147698b9b: Status 404 returned error can't find the container with id 72b280f03e111ac51e653845a6a8b55b86c6addfa37035ca5821b3b147698b9b Feb 18 12:55:07 crc kubenswrapper[4717]: I0218 12:55:07.057797 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90142745-f8cb-4e62-bfa2-8826836f7ceb" path="/var/lib/kubelet/pods/90142745-f8cb-4e62-bfa2-8826836f7ceb/volumes" Feb 18 12:55:07 crc kubenswrapper[4717]: I0218 12:55:07.834194 4717 generic.go:334] "Generic (PLEG): container finished" podID="ded89430-b756-49dd-acf9-d1ed6c6c502f" containerID="05b4c6fe75959608324eaceeba53542c8c2f1a9c721c6558a0c9375b85b2f51f" exitCode=0 Feb 18 12:55:07 crc kubenswrapper[4717]: I0218 12:55:07.834242 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vsg65/crc-debug-h2rk6" event={"ID":"ded89430-b756-49dd-acf9-d1ed6c6c502f","Type":"ContainerDied","Data":"05b4c6fe75959608324eaceeba53542c8c2f1a9c721c6558a0c9375b85b2f51f"} Feb 18 12:55:07 crc kubenswrapper[4717]: I0218 12:55:07.834500 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vsg65/crc-debug-h2rk6" event={"ID":"ded89430-b756-49dd-acf9-d1ed6c6c502f","Type":"ContainerStarted","Data":"72b280f03e111ac51e653845a6a8b55b86c6addfa37035ca5821b3b147698b9b"} Feb 18 12:55:07 crc kubenswrapper[4717]: I0218 12:55:07.875628 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vsg65/crc-debug-h2rk6"] Feb 18 12:55:07 crc kubenswrapper[4717]: I0218 12:55:07.885221 4717 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vsg65/crc-debug-h2rk6"] Feb 18 12:55:08 crc kubenswrapper[4717]: I0218 12:55:08.844839 4717 generic.go:334] "Generic (PLEG): container finished" podID="d6ee87a2-4d9d-48c2-aadf-a3339a66c717" containerID="b95fa26c396665471412dd87950463c6d335540031ce6274a11e320315c17c28" exitCode=0 Feb 18 12:55:08 crc kubenswrapper[4717]: I0218 12:55:08.844919 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9vh6" event={"ID":"d6ee87a2-4d9d-48c2-aadf-a3339a66c717","Type":"ContainerDied","Data":"b95fa26c396665471412dd87950463c6d335540031ce6274a11e320315c17c28"} Feb 18 12:55:08 crc kubenswrapper[4717]: I0218 12:55:08.960255 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vsg65/crc-debug-h2rk6" Feb 18 12:55:09 crc kubenswrapper[4717]: I0218 12:55:09.067081 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4c86\" (UniqueName: \"kubernetes.io/projected/ded89430-b756-49dd-acf9-d1ed6c6c502f-kube-api-access-m4c86\") pod \"ded89430-b756-49dd-acf9-d1ed6c6c502f\" (UID: \"ded89430-b756-49dd-acf9-d1ed6c6c502f\") " Feb 18 12:55:09 crc kubenswrapper[4717]: I0218 12:55:09.067199 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ded89430-b756-49dd-acf9-d1ed6c6c502f-host\") pod \"ded89430-b756-49dd-acf9-d1ed6c6c502f\" (UID: \"ded89430-b756-49dd-acf9-d1ed6c6c502f\") " Feb 18 12:55:09 crc kubenswrapper[4717]: I0218 12:55:09.067252 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ded89430-b756-49dd-acf9-d1ed6c6c502f-host" (OuterVolumeSpecName: "host") pod "ded89430-b756-49dd-acf9-d1ed6c6c502f" (UID: "ded89430-b756-49dd-acf9-d1ed6c6c502f"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 12:55:09 crc kubenswrapper[4717]: I0218 12:55:09.067779 4717 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ded89430-b756-49dd-acf9-d1ed6c6c502f-host\") on node \"crc\" DevicePath \"\"" Feb 18 12:55:09 crc kubenswrapper[4717]: I0218 12:55:09.073544 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ded89430-b756-49dd-acf9-d1ed6c6c502f-kube-api-access-m4c86" (OuterVolumeSpecName: "kube-api-access-m4c86") pod "ded89430-b756-49dd-acf9-d1ed6c6c502f" (UID: "ded89430-b756-49dd-acf9-d1ed6c6c502f"). InnerVolumeSpecName "kube-api-access-m4c86". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:55:09 crc kubenswrapper[4717]: I0218 12:55:09.170102 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4c86\" (UniqueName: \"kubernetes.io/projected/ded89430-b756-49dd-acf9-d1ed6c6c502f-kube-api-access-m4c86\") on node \"crc\" DevicePath \"\"" Feb 18 12:55:09 crc kubenswrapper[4717]: I0218 12:55:09.856102 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vsg65/crc-debug-h2rk6" Feb 18 12:55:09 crc kubenswrapper[4717]: I0218 12:55:09.856119 4717 scope.go:117] "RemoveContainer" containerID="05b4c6fe75959608324eaceeba53542c8c2f1a9c721c6558a0c9375b85b2f51f" Feb 18 12:55:09 crc kubenswrapper[4717]: I0218 12:55:09.869546 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9vh6" event={"ID":"d6ee87a2-4d9d-48c2-aadf-a3339a66c717","Type":"ContainerStarted","Data":"087f79cf5c2c4e3f707a1a761ebf07602eb4303d2b5047db5478825788f3d222"} Feb 18 12:55:09 crc kubenswrapper[4717]: I0218 12:55:09.906136 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x9vh6" podStartSLOduration=3.398749828 podStartE2EDuration="5.90611296s" podCreationTimestamp="2026-02-18 12:55:04 +0000 UTC" firstStartedPulling="2026-02-18 12:55:06.820818607 +0000 UTC m=+3941.222919923" lastFinishedPulling="2026-02-18 12:55:09.328181739 +0000 UTC m=+3943.730283055" observedRunningTime="2026-02-18 12:55:09.897551374 +0000 UTC m=+3944.299652690" watchObservedRunningTime="2026-02-18 12:55:09.90611296 +0000 UTC m=+3944.308214276" Feb 18 12:55:11 crc kubenswrapper[4717]: I0218 12:55:11.047292 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ded89430-b756-49dd-acf9-d1ed6c6c502f" path="/var/lib/kubelet/pods/ded89430-b756-49dd-acf9-d1ed6c6c502f/volumes" Feb 18 12:55:14 crc kubenswrapper[4717]: I0218 12:55:14.037724 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:55:14 crc kubenswrapper[4717]: E0218 12:55:14.038313 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:55:15 crc kubenswrapper[4717]: I0218 12:55:15.120951 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:15 crc kubenswrapper[4717]: I0218 12:55:15.121043 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:15 crc kubenswrapper[4717]: I0218 12:55:15.174134 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:15 crc kubenswrapper[4717]: I0218 12:55:15.970217 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:16 crc kubenswrapper[4717]: I0218 12:55:16.021942 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x9vh6"] Feb 18 12:55:17 crc kubenswrapper[4717]: I0218 12:55:17.950315 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x9vh6" podUID="d6ee87a2-4d9d-48c2-aadf-a3339a66c717" containerName="registry-server" containerID="cri-o://087f79cf5c2c4e3f707a1a761ebf07602eb4303d2b5047db5478825788f3d222" gracePeriod=2 Feb 18 12:55:18 crc kubenswrapper[4717]: I0218 12:55:18.443839 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:18 crc kubenswrapper[4717]: I0218 12:55:18.568402 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-catalog-content\") pod \"d6ee87a2-4d9d-48c2-aadf-a3339a66c717\" (UID: \"d6ee87a2-4d9d-48c2-aadf-a3339a66c717\") " Feb 18 12:55:18 crc kubenswrapper[4717]: I0218 12:55:18.568660 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fzzq\" (UniqueName: \"kubernetes.io/projected/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-kube-api-access-4fzzq\") pod \"d6ee87a2-4d9d-48c2-aadf-a3339a66c717\" (UID: \"d6ee87a2-4d9d-48c2-aadf-a3339a66c717\") " Feb 18 12:55:18 crc kubenswrapper[4717]: I0218 12:55:18.568758 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-utilities\") pod \"d6ee87a2-4d9d-48c2-aadf-a3339a66c717\" (UID: \"d6ee87a2-4d9d-48c2-aadf-a3339a66c717\") " Feb 18 12:55:18 crc kubenswrapper[4717]: I0218 12:55:18.570029 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-utilities" (OuterVolumeSpecName: "utilities") pod "d6ee87a2-4d9d-48c2-aadf-a3339a66c717" (UID: "d6ee87a2-4d9d-48c2-aadf-a3339a66c717"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:55:18 crc kubenswrapper[4717]: I0218 12:55:18.576030 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-kube-api-access-4fzzq" (OuterVolumeSpecName: "kube-api-access-4fzzq") pod "d6ee87a2-4d9d-48c2-aadf-a3339a66c717" (UID: "d6ee87a2-4d9d-48c2-aadf-a3339a66c717"). InnerVolumeSpecName "kube-api-access-4fzzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:55:18 crc kubenswrapper[4717]: I0218 12:55:18.622911 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6ee87a2-4d9d-48c2-aadf-a3339a66c717" (UID: "d6ee87a2-4d9d-48c2-aadf-a3339a66c717"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:55:18 crc kubenswrapper[4717]: I0218 12:55:18.671075 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fzzq\" (UniqueName: \"kubernetes.io/projected/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-kube-api-access-4fzzq\") on node \"crc\" DevicePath \"\"" Feb 18 12:55:18 crc kubenswrapper[4717]: I0218 12:55:18.671117 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:55:18 crc kubenswrapper[4717]: I0218 12:55:18.671127 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ee87a2-4d9d-48c2-aadf-a3339a66c717-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:55:18 crc kubenswrapper[4717]: I0218 12:55:18.962729 4717 generic.go:334] "Generic (PLEG): container finished" podID="d6ee87a2-4d9d-48c2-aadf-a3339a66c717" containerID="087f79cf5c2c4e3f707a1a761ebf07602eb4303d2b5047db5478825788f3d222" exitCode=0 Feb 18 12:55:18 crc kubenswrapper[4717]: I0218 12:55:18.962806 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x9vh6" event={"ID":"d6ee87a2-4d9d-48c2-aadf-a3339a66c717","Type":"ContainerDied","Data":"087f79cf5c2c4e3f707a1a761ebf07602eb4303d2b5047db5478825788f3d222"} Feb 18 12:55:18 crc kubenswrapper[4717]: I0218 12:55:18.962864 4717 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-x9vh6" event={"ID":"d6ee87a2-4d9d-48c2-aadf-a3339a66c717","Type":"ContainerDied","Data":"f567552c950c12a4b554242626df13447c9274f792d83e72e1c7f546fa988557"} Feb 18 12:55:18 crc kubenswrapper[4717]: I0218 12:55:18.962870 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x9vh6" Feb 18 12:55:18 crc kubenswrapper[4717]: I0218 12:55:18.962891 4717 scope.go:117] "RemoveContainer" containerID="087f79cf5c2c4e3f707a1a761ebf07602eb4303d2b5047db5478825788f3d222" Feb 18 12:55:19 crc kubenswrapper[4717]: I0218 12:55:19.003666 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x9vh6"] Feb 18 12:55:19 crc kubenswrapper[4717]: I0218 12:55:19.004734 4717 scope.go:117] "RemoveContainer" containerID="b95fa26c396665471412dd87950463c6d335540031ce6274a11e320315c17c28" Feb 18 12:55:19 crc kubenswrapper[4717]: I0218 12:55:19.014500 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x9vh6"] Feb 18 12:55:19 crc kubenswrapper[4717]: I0218 12:55:19.032112 4717 scope.go:117] "RemoveContainer" containerID="cc989a4a104fa2ff09e3562853bdff54ed404d9c6e9c9c79f6d5e88f2ba72307" Feb 18 12:55:19 crc kubenswrapper[4717]: I0218 12:55:19.279406 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6ee87a2-4d9d-48c2-aadf-a3339a66c717" path="/var/lib/kubelet/pods/d6ee87a2-4d9d-48c2-aadf-a3339a66c717/volumes" Feb 18 12:55:19 crc kubenswrapper[4717]: I0218 12:55:19.286306 4717 scope.go:117] "RemoveContainer" containerID="087f79cf5c2c4e3f707a1a761ebf07602eb4303d2b5047db5478825788f3d222" Feb 18 12:55:19 crc kubenswrapper[4717]: E0218 12:55:19.286955 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"087f79cf5c2c4e3f707a1a761ebf07602eb4303d2b5047db5478825788f3d222\": container with ID 
starting with 087f79cf5c2c4e3f707a1a761ebf07602eb4303d2b5047db5478825788f3d222 not found: ID does not exist" containerID="087f79cf5c2c4e3f707a1a761ebf07602eb4303d2b5047db5478825788f3d222" Feb 18 12:55:19 crc kubenswrapper[4717]: I0218 12:55:19.287000 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"087f79cf5c2c4e3f707a1a761ebf07602eb4303d2b5047db5478825788f3d222"} err="failed to get container status \"087f79cf5c2c4e3f707a1a761ebf07602eb4303d2b5047db5478825788f3d222\": rpc error: code = NotFound desc = could not find container \"087f79cf5c2c4e3f707a1a761ebf07602eb4303d2b5047db5478825788f3d222\": container with ID starting with 087f79cf5c2c4e3f707a1a761ebf07602eb4303d2b5047db5478825788f3d222 not found: ID does not exist" Feb 18 12:55:19 crc kubenswrapper[4717]: I0218 12:55:19.287038 4717 scope.go:117] "RemoveContainer" containerID="b95fa26c396665471412dd87950463c6d335540031ce6274a11e320315c17c28" Feb 18 12:55:19 crc kubenswrapper[4717]: E0218 12:55:19.287700 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b95fa26c396665471412dd87950463c6d335540031ce6274a11e320315c17c28\": container with ID starting with b95fa26c396665471412dd87950463c6d335540031ce6274a11e320315c17c28 not found: ID does not exist" containerID="b95fa26c396665471412dd87950463c6d335540031ce6274a11e320315c17c28" Feb 18 12:55:19 crc kubenswrapper[4717]: I0218 12:55:19.287719 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b95fa26c396665471412dd87950463c6d335540031ce6274a11e320315c17c28"} err="failed to get container status \"b95fa26c396665471412dd87950463c6d335540031ce6274a11e320315c17c28\": rpc error: code = NotFound desc = could not find container \"b95fa26c396665471412dd87950463c6d335540031ce6274a11e320315c17c28\": container with ID starting with b95fa26c396665471412dd87950463c6d335540031ce6274a11e320315c17c28 not found: 
ID does not exist" Feb 18 12:55:19 crc kubenswrapper[4717]: I0218 12:55:19.287733 4717 scope.go:117] "RemoveContainer" containerID="cc989a4a104fa2ff09e3562853bdff54ed404d9c6e9c9c79f6d5e88f2ba72307" Feb 18 12:55:19 crc kubenswrapper[4717]: E0218 12:55:19.288435 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc989a4a104fa2ff09e3562853bdff54ed404d9c6e9c9c79f6d5e88f2ba72307\": container with ID starting with cc989a4a104fa2ff09e3562853bdff54ed404d9c6e9c9c79f6d5e88f2ba72307 not found: ID does not exist" containerID="cc989a4a104fa2ff09e3562853bdff54ed404d9c6e9c9c79f6d5e88f2ba72307" Feb 18 12:55:19 crc kubenswrapper[4717]: I0218 12:55:19.288463 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc989a4a104fa2ff09e3562853bdff54ed404d9c6e9c9c79f6d5e88f2ba72307"} err="failed to get container status \"cc989a4a104fa2ff09e3562853bdff54ed404d9c6e9c9c79f6d5e88f2ba72307\": rpc error: code = NotFound desc = could not find container \"cc989a4a104fa2ff09e3562853bdff54ed404d9c6e9c9c79f6d5e88f2ba72307\": container with ID starting with cc989a4a104fa2ff09e3562853bdff54ed404d9c6e9c9c79f6d5e88f2ba72307 not found: ID does not exist" Feb 18 12:55:27 crc kubenswrapper[4717]: I0218 12:55:27.061958 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:55:27 crc kubenswrapper[4717]: E0218 12:55:27.063058 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:55:40 crc kubenswrapper[4717]: I0218 12:55:40.036763 4717 
scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:55:40 crc kubenswrapper[4717]: E0218 12:55:40.038115 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:55:40 crc kubenswrapper[4717]: I0218 12:55:40.914579 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-55d8f77d98-h2lh4_afc0d8b7-77e8-4fa8-8fea-70d32de7045c/barbican-api/0.log" Feb 18 12:55:41 crc kubenswrapper[4717]: I0218 12:55:41.071678 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b857cc988-tklbq_42c182ca-f340-4ddf-ac38-b5eba6d9dbe5/barbican-keystone-listener/0.log" Feb 18 12:55:41 crc kubenswrapper[4717]: I0218 12:55:41.118686 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-55d8f77d98-h2lh4_afc0d8b7-77e8-4fa8-8fea-70d32de7045c/barbican-api-log/0.log" Feb 18 12:55:41 crc kubenswrapper[4717]: I0218 12:55:41.248569 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b857cc988-tklbq_42c182ca-f340-4ddf-ac38-b5eba6d9dbe5/barbican-keystone-listener-log/0.log" Feb 18 12:55:41 crc kubenswrapper[4717]: I0218 12:55:41.346773 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8649d4b975-zblq7_45e47daf-054d-4262-b76c-349fb97ec950/barbican-worker/0.log" Feb 18 12:55:41 crc kubenswrapper[4717]: I0218 12:55:41.360386 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-8649d4b975-zblq7_45e47daf-054d-4262-b76c-349fb97ec950/barbican-worker-log/0.log" Feb 18 12:55:41 crc kubenswrapper[4717]: I0218 12:55:41.543174 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-cj6pg_ca940f40-1894-4b6a-bb57-acac20cd47f3/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:55:41 crc kubenswrapper[4717]: I0218 12:55:41.642067 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_73da3578-c044-4462-9bdc-0985effde3bf/ceilometer-central-agent/0.log" Feb 18 12:55:41 crc kubenswrapper[4717]: I0218 12:55:41.687127 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_73da3578-c044-4462-9bdc-0985effde3bf/ceilometer-notification-agent/0.log" Feb 18 12:55:41 crc kubenswrapper[4717]: I0218 12:55:41.753696 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_73da3578-c044-4462-9bdc-0985effde3bf/proxy-httpd/0.log" Feb 18 12:55:41 crc kubenswrapper[4717]: I0218 12:55:41.768420 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_73da3578-c044-4462-9bdc-0985effde3bf/sg-core/0.log" Feb 18 12:55:41 crc kubenswrapper[4717]: I0218 12:55:41.949248 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_756321c7-681b-4183-b0ef-4afab35a28ae/cinder-api/0.log" Feb 18 12:55:41 crc kubenswrapper[4717]: I0218 12:55:41.969945 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_756321c7-681b-4183-b0ef-4afab35a28ae/cinder-api-log/0.log" Feb 18 12:55:42 crc kubenswrapper[4717]: I0218 12:55:42.101640 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_96f31374-cbb7-49a4-878b-5667b60b960e/cinder-scheduler/0.log" Feb 18 12:55:42 crc kubenswrapper[4717]: I0218 12:55:42.179006 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_96f31374-cbb7-49a4-878b-5667b60b960e/probe/0.log" Feb 18 12:55:42 crc kubenswrapper[4717]: I0218 12:55:42.253496 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-7p7dk_16dc15ac-c4c3-4d90-8fd2-20054f92b894/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:55:42 crc kubenswrapper[4717]: I0218 12:55:42.408213 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-l8fzx_38e6fde5-49a9-46e9-bb5d-382aaf00efa7/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:55:42 crc kubenswrapper[4717]: I0218 12:55:42.448646 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-q9j9f_dffcf399-439f-4698-8a0a-b247675685be/init/0.log" Feb 18 12:55:42 crc kubenswrapper[4717]: I0218 12:55:42.619277 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-q9j9f_dffcf399-439f-4698-8a0a-b247675685be/init/0.log" Feb 18 12:55:42 crc kubenswrapper[4717]: I0218 12:55:42.681357 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-dfh57_57c4f818-2860-43d9-9c1b-f99b48449af0/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:55:42 crc kubenswrapper[4717]: I0218 12:55:42.693157 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-q9j9f_dffcf399-439f-4698-8a0a-b247675685be/dnsmasq-dns/0.log" Feb 18 12:55:42 crc kubenswrapper[4717]: I0218 12:55:42.906543 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_05e71379-15cf-4f83-a548-a46ba29caada/glance-log/0.log" Feb 18 12:55:42 crc kubenswrapper[4717]: I0218 12:55:42.911679 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_05e71379-15cf-4f83-a548-a46ba29caada/glance-httpd/0.log" Feb 18 12:55:43 crc kubenswrapper[4717]: I0218 12:55:43.059404 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_71fd55dc-beb4-4d07-af77-f244d5b1d399/glance-httpd/0.log" Feb 18 12:55:43 crc kubenswrapper[4717]: I0218 12:55:43.091942 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_71fd55dc-beb4-4d07-af77-f244d5b1d399/glance-log/0.log" Feb 18 12:55:43 crc kubenswrapper[4717]: I0218 12:55:43.259383 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-d949b4564-9ns6m_72d097e6-a40b-4e3e-8376-f3866f63e9d3/horizon/0.log" Feb 18 12:55:43 crc kubenswrapper[4717]: I0218 12:55:43.399836 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-fl2n5_fd992beb-fe74-4076-8233-3bdc67b5de99/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:55:43 crc kubenswrapper[4717]: I0218 12:55:43.622138 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-8dcqq_a2da22c1-d4ae-46e5-919a-b69a2a8807e1/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:55:43 crc kubenswrapper[4717]: I0218 12:55:43.640687 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-d949b4564-9ns6m_72d097e6-a40b-4e3e-8376-f3866f63e9d3/horizon-log/0.log" Feb 18 12:55:43 crc kubenswrapper[4717]: I0218 12:55:43.881708 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_71b7862b-19b8-4921-955c-4948b428f4eb/kube-state-metrics/0.log" Feb 18 12:55:43 crc kubenswrapper[4717]: I0218 12:55:43.983116 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-766b8ffffc-2xlnb_1554ac8b-466a-47d1-a768-b250ee1ca204/keystone-api/0.log" Feb 18 12:55:44 crc kubenswrapper[4717]: I0218 12:55:44.151837 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-mtnsj_6e45806a-5dfc-4368-b276-a59ba198f17e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:55:44 crc kubenswrapper[4717]: I0218 12:55:44.516219 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7bf89fd777-rchb4_a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c/neutron-httpd/0.log" Feb 18 12:55:44 crc kubenswrapper[4717]: I0218 12:55:44.541502 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7bf89fd777-rchb4_a13763c0-39c7-48b0-8e5a-a7b9b2bfaf7c/neutron-api/0.log" Feb 18 12:55:44 crc kubenswrapper[4717]: I0218 12:55:44.740792 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-x9st6_3ef3746b-6714-4c50-b0fe-3d5d1632f6c6/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:55:45 crc kubenswrapper[4717]: I0218 12:55:45.344060 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2b4f4f54-2066-4277-a45f-aefd9dc8130c/nova-api-log/0.log" Feb 18 12:55:45 crc kubenswrapper[4717]: I0218 12:55:45.483857 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_fb3569ab-95c9-42eb-9c7d-979b7c09f862/nova-cell0-conductor-conductor/0.log" Feb 18 12:55:45 crc kubenswrapper[4717]: I0218 12:55:45.665229 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2b4f4f54-2066-4277-a45f-aefd9dc8130c/nova-api-api/0.log" Feb 18 12:55:45 crc kubenswrapper[4717]: I0218 12:55:45.752815 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_377276ef-9093-4bae-954b-b833c89261ea/nova-cell1-conductor-conductor/0.log" 
Feb 18 12:55:45 crc kubenswrapper[4717]: I0218 12:55:45.862245 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c95e107a-542e-4a31-98f9-aed639f1fc42/nova-cell1-novncproxy-novncproxy/0.log" Feb 18 12:55:46 crc kubenswrapper[4717]: I0218 12:55:46.018536 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-cgnxc_2facfe0c-cdcb-44fb-ab60-197e5cf58fb3/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:55:46 crc kubenswrapper[4717]: I0218 12:55:46.203653 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc/nova-metadata-log/0.log" Feb 18 12:55:46 crc kubenswrapper[4717]: I0218 12:55:46.533934 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e9941a28-9836-48c1-bab2-c55c92861692/nova-scheduler-scheduler/0.log" Feb 18 12:55:46 crc kubenswrapper[4717]: I0218 12:55:46.585389 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6ba708c6-57e5-4406-8773-2a700b0be0fc/mysql-bootstrap/0.log" Feb 18 12:55:46 crc kubenswrapper[4717]: I0218 12:55:46.814170 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6ba708c6-57e5-4406-8773-2a700b0be0fc/mysql-bootstrap/0.log" Feb 18 12:55:46 crc kubenswrapper[4717]: I0218 12:55:46.821996 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6ba708c6-57e5-4406-8773-2a700b0be0fc/galera/0.log" Feb 18 12:55:47 crc kubenswrapper[4717]: I0218 12:55:47.040745 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9e12df56-53ef-42bc-9f15-2c7a89b391d1/mysql-bootstrap/0.log" Feb 18 12:55:47 crc kubenswrapper[4717]: I0218 12:55:47.521901 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_9e12df56-53ef-42bc-9f15-2c7a89b391d1/mysql-bootstrap/0.log" Feb 18 12:55:47 crc kubenswrapper[4717]: I0218 12:55:47.578303 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9e12df56-53ef-42bc-9f15-2c7a89b391d1/galera/0.log" Feb 18 12:55:47 crc kubenswrapper[4717]: I0218 12:55:47.712376 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9b4ef341-6659-4283-81b4-78674dfd9fc8/openstackclient/0.log" Feb 18 12:55:47 crc kubenswrapper[4717]: I0218 12:55:47.719737 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0e3146ec-b27b-4767-9f6f-2bc5b6cb92cc/nova-metadata-metadata/0.log" Feb 18 12:55:47 crc kubenswrapper[4717]: I0218 12:55:47.967041 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cqvjv_6e3a25d1-3ad3-4ecb-bca6-84643516d734/ovn-controller/0.log" Feb 18 12:55:47 crc kubenswrapper[4717]: I0218 12:55:47.996079 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-htmxg_b720848b-1453-4de9-982e-de66099bb8f7/openstack-network-exporter/0.log" Feb 18 12:55:48 crc kubenswrapper[4717]: I0218 12:55:48.156187 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zzbgg_d145d1aa-1d6c-4285-9670-42d3bb4ea1cd/ovsdb-server-init/0.log" Feb 18 12:55:48 crc kubenswrapper[4717]: I0218 12:55:48.677845 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zzbgg_d145d1aa-1d6c-4285-9670-42d3bb4ea1cd/ovs-vswitchd/0.log" Feb 18 12:55:48 crc kubenswrapper[4717]: I0218 12:55:48.698042 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zzbgg_d145d1aa-1d6c-4285-9670-42d3bb4ea1cd/ovsdb-server/0.log" Feb 18 12:55:48 crc kubenswrapper[4717]: I0218 12:55:48.700455 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-zzbgg_d145d1aa-1d6c-4285-9670-42d3bb4ea1cd/ovsdb-server-init/0.log" Feb 18 12:55:48 crc kubenswrapper[4717]: I0218 12:55:48.927522 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-pnbv9_5cdc94da-0dcb-40a3-9800-2655bae295cb/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:55:48 crc kubenswrapper[4717]: I0218 12:55:48.996978 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_31402202-302e-46a9-b565-d7b8143153d5/ovn-northd/0.log" Feb 18 12:55:49 crc kubenswrapper[4717]: I0218 12:55:49.018765 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_31402202-302e-46a9-b565-d7b8143153d5/openstack-network-exporter/0.log" Feb 18 12:55:49 crc kubenswrapper[4717]: I0218 12:55:49.138661 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_77683781-6580-4589-8869-bbaea0d6d8a0/openstack-network-exporter/0.log" Feb 18 12:55:49 crc kubenswrapper[4717]: I0218 12:55:49.200133 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_77683781-6580-4589-8869-bbaea0d6d8a0/ovsdbserver-nb/0.log" Feb 18 12:55:49 crc kubenswrapper[4717]: I0218 12:55:49.755074 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_41966c80-d352-4f94-b011-1ef922e3250f/ovsdbserver-sb/0.log" Feb 18 12:55:49 crc kubenswrapper[4717]: I0218 12:55:49.842639 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_41966c80-d352-4f94-b011-1ef922e3250f/openstack-network-exporter/0.log" Feb 18 12:55:49 crc kubenswrapper[4717]: I0218 12:55:49.995662 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-765f565894-lj9d4_94b4824f-c89e-4740-ab96-6a36d2f7abb7/placement-api/0.log" Feb 18 12:55:50 crc kubenswrapper[4717]: I0218 12:55:50.136897 4717 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1c31df4b-bbb2-4bdf-9c36-db03b261067c/setup-container/0.log" Feb 18 12:55:50 crc kubenswrapper[4717]: I0218 12:55:50.162674 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-765f565894-lj9d4_94b4824f-c89e-4740-ab96-6a36d2f7abb7/placement-log/0.log" Feb 18 12:55:50 crc kubenswrapper[4717]: I0218 12:55:50.328742 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1c31df4b-bbb2-4bdf-9c36-db03b261067c/setup-container/0.log" Feb 18 12:55:50 crc kubenswrapper[4717]: I0218 12:55:50.354889 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1c31df4b-bbb2-4bdf-9c36-db03b261067c/rabbitmq/0.log" Feb 18 12:55:50 crc kubenswrapper[4717]: I0218 12:55:50.438696 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_57c90bed-ebc6-4053-b92b-1622edda048a/setup-container/0.log" Feb 18 12:55:50 crc kubenswrapper[4717]: I0218 12:55:50.650125 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_57c90bed-ebc6-4053-b92b-1622edda048a/setup-container/0.log" Feb 18 12:55:50 crc kubenswrapper[4717]: I0218 12:55:50.681209 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-nslld_0e91bf1c-6032-4f2a-9b0d-2d64c33c2d47/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:55:50 crc kubenswrapper[4717]: I0218 12:55:50.745397 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_57c90bed-ebc6-4053-b92b-1622edda048a/rabbitmq/0.log" Feb 18 12:55:50 crc kubenswrapper[4717]: I0218 12:55:50.875116 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-tjgb9_51e7bb49-4d7d-44a3-bb44-dcf18ac0f219/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:55:50 crc 
kubenswrapper[4717]: I0218 12:55:50.976572 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mzv27_74c7ea9f-0f71-44a4-b3cb-8fd20e90f456/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:55:51 crc kubenswrapper[4717]: I0218 12:55:51.175100 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-kjq2f_32a22361-b7f3-4429-a590-edecf026891c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:55:51 crc kubenswrapper[4717]: I0218 12:55:51.252320 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-98rv7_c1782f70-1ae6-42da-98ad-f42f2495b261/ssh-known-hosts-edpm-deployment/0.log" Feb 18 12:55:51 crc kubenswrapper[4717]: I0218 12:55:51.767114 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-586778dd75-mtms6_e21881f2-73fb-4d0f-974c-a74694a2b301/proxy-server/0.log" Feb 18 12:55:51 crc kubenswrapper[4717]: I0218 12:55:51.825383 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-586778dd75-mtms6_e21881f2-73fb-4d0f-974c-a74694a2b301/proxy-httpd/0.log" Feb 18 12:55:51 crc kubenswrapper[4717]: I0218 12:55:51.839735 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-66d7l_ac3ad8c1-04a7-46f5-9c76-98c92e3c2158/swift-ring-rebalance/0.log" Feb 18 12:55:52 crc kubenswrapper[4717]: I0218 12:55:52.031236 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/account-auditor/0.log" Feb 18 12:55:52 crc kubenswrapper[4717]: I0218 12:55:52.036219 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:55:52 crc kubenswrapper[4717]: E0218 12:55:52.036611 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:55:52 crc kubenswrapper[4717]: I0218 12:55:52.045227 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/account-reaper/0.log" Feb 18 12:55:52 crc kubenswrapper[4717]: I0218 12:55:52.106422 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/account-replicator/0.log" Feb 18 12:55:52 crc kubenswrapper[4717]: I0218 12:55:52.245209 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/account-server/0.log" Feb 18 12:55:52 crc kubenswrapper[4717]: I0218 12:55:52.278230 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/container-auditor/0.log" Feb 18 12:55:52 crc kubenswrapper[4717]: I0218 12:55:52.373763 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/container-server/0.log" Feb 18 12:55:52 crc kubenswrapper[4717]: I0218 12:55:52.385815 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/container-replicator/0.log" Feb 18 12:55:52 crc kubenswrapper[4717]: I0218 12:55:52.501487 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/container-updater/0.log" Feb 18 12:55:52 crc kubenswrapper[4717]: I0218 12:55:52.539050 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/object-auditor/0.log" Feb 18 12:55:52 crc kubenswrapper[4717]: I0218 12:55:52.634462 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/object-expirer/0.log" Feb 18 12:55:52 crc kubenswrapper[4717]: I0218 12:55:52.700822 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/object-server/0.log" Feb 18 12:55:52 crc kubenswrapper[4717]: I0218 12:55:52.700965 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/object-replicator/0.log" Feb 18 12:55:52 crc kubenswrapper[4717]: I0218 12:55:52.777922 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/object-updater/0.log" Feb 18 12:55:52 crc kubenswrapper[4717]: I0218 12:55:52.892510 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/rsync/0.log" Feb 18 12:55:52 crc kubenswrapper[4717]: I0218 12:55:52.955726 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9102b2e6-400b-4ba1-97a2-eb5be85f778a/swift-recon-cron/0.log" Feb 18 12:55:53 crc kubenswrapper[4717]: I0218 12:55:53.127728 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-zv8sv_95c3f30f-0b5a-4b57-ad1e-61cfd2b1f963/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:55:53 crc kubenswrapper[4717]: I0218 12:55:53.260635 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_00f1b8ee-1760-4308-b796-155234b0a811/tempest-tests-tempest-tests-runner/0.log" Feb 18 12:55:53 crc kubenswrapper[4717]: I0218 12:55:53.372018 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_67131484-bd44-40b0-92da-d06886a8179b/test-operator-logs-container/0.log" Feb 18 12:55:53 crc kubenswrapper[4717]: I0218 12:55:53.549657 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5sft2_2a2cdadf-7168-4073-ac4d-68893d4f61de/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 12:56:03 crc kubenswrapper[4717]: I0218 12:56:03.411168 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_760991b3-fcd6-4ea6-bc3b-3fad54f0c70c/memcached/0.log" Feb 18 12:56:07 crc kubenswrapper[4717]: I0218 12:56:07.043636 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:56:07 crc kubenswrapper[4717]: E0218 12:56:07.044486 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:56:22 crc kubenswrapper[4717]: I0218 12:56:22.037229 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:56:22 crc kubenswrapper[4717]: E0218 12:56:22.038035 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 
18 12:56:22 crc kubenswrapper[4717]: I0218 12:56:22.569129 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8_8a937a28-c870-4dde-a3ee-ebb15180d623/util/0.log" Feb 18 12:56:22 crc kubenswrapper[4717]: I0218 12:56:22.703880 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8_8a937a28-c870-4dde-a3ee-ebb15180d623/pull/0.log" Feb 18 12:56:22 crc kubenswrapper[4717]: I0218 12:56:22.711128 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8_8a937a28-c870-4dde-a3ee-ebb15180d623/util/0.log" Feb 18 12:56:22 crc kubenswrapper[4717]: I0218 12:56:22.749100 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8_8a937a28-c870-4dde-a3ee-ebb15180d623/pull/0.log" Feb 18 12:56:23 crc kubenswrapper[4717]: I0218 12:56:23.105155 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8_8a937a28-c870-4dde-a3ee-ebb15180d623/pull/0.log" Feb 18 12:56:23 crc kubenswrapper[4717]: I0218 12:56:23.177950 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8_8a937a28-c870-4dde-a3ee-ebb15180d623/util/0.log" Feb 18 12:56:23 crc kubenswrapper[4717]: I0218 12:56:23.213210 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_556b6a8be8538222565428e7e3d6cf6fa89fa3673d2ce92923b146d72186sh8_8a937a28-c870-4dde-a3ee-ebb15180d623/extract/0.log" Feb 18 12:56:23 crc kubenswrapper[4717]: I0218 12:56:23.725726 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-2n2t2_986ac762-6758-4402-a5c9-849780ff7fab/manager/0.log" Feb 18 12:56:24 crc kubenswrapper[4717]: I0218 12:56:24.096103 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-v6qrx_ba6c36b6-f446-4cc2-b1b0-9ebb6146dfde/manager/0.log" Feb 18 12:56:24 crc kubenswrapper[4717]: I0218 12:56:24.310981 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-9cnsb_f9800e95-aed6-4d9b-9e88-b6a5f303ee16/manager/0.log" Feb 18 12:56:24 crc kubenswrapper[4717]: I0218 12:56:24.576004 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-rvxd2_e82b0608-77fd-4e73-bafb-00a7b43b6299/manager/0.log" Feb 18 12:56:25 crc kubenswrapper[4717]: I0218 12:56:25.001675 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-ldqdh_b6f768c0-a04d-49f7-a0ae-5ea5ee09c26b/manager/0.log" Feb 18 12:56:25 crc kubenswrapper[4717]: I0218 12:56:25.255017 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-szzvb_96c16cf0-31b6-4830-b92f-f25b4ce11979/manager/0.log" Feb 18 12:56:25 crc kubenswrapper[4717]: I0218 12:56:25.585391 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-2lmml_8d4a2d32-4724-4580-a542-7552e580ed15/manager/0.log" Feb 18 12:56:25 crc kubenswrapper[4717]: I0218 12:56:25.658569 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-nljkj_927be7f4-3bc1-42c8-917f-8b898bbbc21a/manager/0.log" Feb 18 12:56:25 crc kubenswrapper[4717]: I0218 12:56:25.813174 4717 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-jhnvm_3503ed6a-e486-404f-8ac3-df63d9d28c2d/manager/0.log" Feb 18 12:56:25 crc kubenswrapper[4717]: I0218 12:56:25.937798 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-sl49j_a14214f1-4961-4ade-ba45-d48139b6fd0d/manager/0.log" Feb 18 12:56:26 crc kubenswrapper[4717]: I0218 12:56:26.212927 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-52fmp_5d07e1a5-0372-4721-ac7a-66c568e32be1/manager/0.log" Feb 18 12:56:26 crc kubenswrapper[4717]: I0218 12:56:26.464803 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-xdtl8_7c5e0309-c138-4668-bad9-eacff0124d24/manager/0.log" Feb 18 12:56:26 crc kubenswrapper[4717]: I0218 12:56:26.699123 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cp9l24_6dc32a01-ea52-421a-8cca-d3a2d5d6e7fe/manager/0.log" Feb 18 12:56:27 crc kubenswrapper[4717]: I0218 12:56:27.132607 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-84d9946dcc-wjrpz_0807bf80-9dc3-48d9-8cbe-748f85b2089f/operator/0.log" Feb 18 12:56:27 crc kubenswrapper[4717]: I0218 12:56:27.429502 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6gp6j_0b3c9586-d52a-4df4-a96f-91773c3bfbfa/registry-server/0.log" Feb 18 12:56:27 crc kubenswrapper[4717]: I0218 12:56:27.722382 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-lx8wj_196844a3-3220-4557-93a1-dc0887bbb53f/manager/0.log" Feb 18 12:56:28 crc kubenswrapper[4717]: I0218 12:56:28.295978 4717 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-j9pkf_3b988944-4f1b-4fb3-89ff-b1a0e61853dc/manager/0.log" Feb 18 12:56:28 crc kubenswrapper[4717]: I0218 12:56:28.536897 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-jrzq4_089bc44f-bd8a-45b5-a497-17cfc2d38bee/operator/0.log" Feb 18 12:56:28 crc kubenswrapper[4717]: I0218 12:56:28.752915 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-26mhv_3faac3ae-2788-4a36-8241-09a601267885/manager/0.log" Feb 18 12:56:29 crc kubenswrapper[4717]: I0218 12:56:29.082087 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-cmrvc_e95271e1-5edd-4862-9dd9-e7ad1feb0ed0/manager/0.log" Feb 18 12:56:29 crc kubenswrapper[4717]: I0218 12:56:29.182357 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-846fd54586-rqvhv_fa9ea26a-44d8-4c4d-8766-d1c19fa59d70/manager/0.log" Feb 18 12:56:29 crc kubenswrapper[4717]: I0218 12:56:29.289900 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-n7n5r_88c2fec0-988b-4496-b054-43f965e23324/manager/0.log" Feb 18 12:56:29 crc kubenswrapper[4717]: I0218 12:56:29.304988 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-9tf7v_e2e22987-3a27-4550-8593-c54e5628e941/manager/0.log" Feb 18 12:56:29 crc kubenswrapper[4717]: I0218 12:56:29.421059 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-h59sk_886d7474-df3b-4777-bd48-d3bf188f7fc9/manager/0.log" Feb 18 12:56:33 crc kubenswrapper[4717]: I0218 12:56:33.036188 4717 
scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:56:33 crc kubenswrapper[4717]: E0218 12:56:33.036897 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:56:34 crc kubenswrapper[4717]: I0218 12:56:34.275735 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-5lvkl_eca115e0-882d-4173-a714-1883215088b5/manager/0.log" Feb 18 12:56:37 crc kubenswrapper[4717]: I0218 12:56:37.822606 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z85mc"] Feb 18 12:56:37 crc kubenswrapper[4717]: E0218 12:56:37.823856 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ee87a2-4d9d-48c2-aadf-a3339a66c717" containerName="extract-content" Feb 18 12:56:37 crc kubenswrapper[4717]: I0218 12:56:37.823873 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ee87a2-4d9d-48c2-aadf-a3339a66c717" containerName="extract-content" Feb 18 12:56:37 crc kubenswrapper[4717]: E0218 12:56:37.823888 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ee87a2-4d9d-48c2-aadf-a3339a66c717" containerName="registry-server" Feb 18 12:56:37 crc kubenswrapper[4717]: I0218 12:56:37.823895 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ee87a2-4d9d-48c2-aadf-a3339a66c717" containerName="registry-server" Feb 18 12:56:37 crc kubenswrapper[4717]: E0218 12:56:37.823905 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ee87a2-4d9d-48c2-aadf-a3339a66c717" 
containerName="extract-utilities" Feb 18 12:56:37 crc kubenswrapper[4717]: I0218 12:56:37.823912 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ee87a2-4d9d-48c2-aadf-a3339a66c717" containerName="extract-utilities" Feb 18 12:56:37 crc kubenswrapper[4717]: E0218 12:56:37.823944 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded89430-b756-49dd-acf9-d1ed6c6c502f" containerName="container-00" Feb 18 12:56:37 crc kubenswrapper[4717]: I0218 12:56:37.823950 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded89430-b756-49dd-acf9-d1ed6c6c502f" containerName="container-00" Feb 18 12:56:37 crc kubenswrapper[4717]: I0218 12:56:37.824170 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ee87a2-4d9d-48c2-aadf-a3339a66c717" containerName="registry-server" Feb 18 12:56:37 crc kubenswrapper[4717]: I0218 12:56:37.824189 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded89430-b756-49dd-acf9-d1ed6c6c502f" containerName="container-00" Feb 18 12:56:37 crc kubenswrapper[4717]: I0218 12:56:37.825706 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:56:37 crc kubenswrapper[4717]: I0218 12:56:37.840416 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z85mc"] Feb 18 12:56:37 crc kubenswrapper[4717]: I0218 12:56:37.905667 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5c173e-5ec2-44a0-b6df-a61172801628-catalog-content\") pod \"redhat-operators-z85mc\" (UID: \"6d5c173e-5ec2-44a0-b6df-a61172801628\") " pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:56:37 crc kubenswrapper[4717]: I0218 12:56:37.905950 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5c173e-5ec2-44a0-b6df-a61172801628-utilities\") pod \"redhat-operators-z85mc\" (UID: \"6d5c173e-5ec2-44a0-b6df-a61172801628\") " pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:56:37 crc kubenswrapper[4717]: I0218 12:56:37.906037 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kvrl\" (UniqueName: \"kubernetes.io/projected/6d5c173e-5ec2-44a0-b6df-a61172801628-kube-api-access-9kvrl\") pod \"redhat-operators-z85mc\" (UID: \"6d5c173e-5ec2-44a0-b6df-a61172801628\") " pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:56:38 crc kubenswrapper[4717]: I0218 12:56:38.008223 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kvrl\" (UniqueName: \"kubernetes.io/projected/6d5c173e-5ec2-44a0-b6df-a61172801628-kube-api-access-9kvrl\") pod \"redhat-operators-z85mc\" (UID: \"6d5c173e-5ec2-44a0-b6df-a61172801628\") " pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:56:38 crc kubenswrapper[4717]: I0218 12:56:38.008422 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5c173e-5ec2-44a0-b6df-a61172801628-catalog-content\") pod \"redhat-operators-z85mc\" (UID: \"6d5c173e-5ec2-44a0-b6df-a61172801628\") " pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:56:38 crc kubenswrapper[4717]: I0218 12:56:38.008467 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5c173e-5ec2-44a0-b6df-a61172801628-utilities\") pod \"redhat-operators-z85mc\" (UID: \"6d5c173e-5ec2-44a0-b6df-a61172801628\") " pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:56:38 crc kubenswrapper[4717]: I0218 12:56:38.008996 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5c173e-5ec2-44a0-b6df-a61172801628-utilities\") pod \"redhat-operators-z85mc\" (UID: \"6d5c173e-5ec2-44a0-b6df-a61172801628\") " pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:56:38 crc kubenswrapper[4717]: I0218 12:56:38.008996 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5c173e-5ec2-44a0-b6df-a61172801628-catalog-content\") pod \"redhat-operators-z85mc\" (UID: \"6d5c173e-5ec2-44a0-b6df-a61172801628\") " pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:56:38 crc kubenswrapper[4717]: I0218 12:56:38.447117 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kvrl\" (UniqueName: \"kubernetes.io/projected/6d5c173e-5ec2-44a0-b6df-a61172801628-kube-api-access-9kvrl\") pod \"redhat-operators-z85mc\" (UID: \"6d5c173e-5ec2-44a0-b6df-a61172801628\") " pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:56:38 crc kubenswrapper[4717]: I0218 12:56:38.489470 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:56:39 crc kubenswrapper[4717]: I0218 12:56:39.110873 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z85mc"] Feb 18 12:56:39 crc kubenswrapper[4717]: I0218 12:56:39.748398 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z85mc" event={"ID":"6d5c173e-5ec2-44a0-b6df-a61172801628","Type":"ContainerDied","Data":"1ee51cb80a1d628c66c2626408cf21e7d8a1ab4461f70a4ac87d5374f2873845"} Feb 18 12:56:39 crc kubenswrapper[4717]: I0218 12:56:39.748763 4717 generic.go:334] "Generic (PLEG): container finished" podID="6d5c173e-5ec2-44a0-b6df-a61172801628" containerID="1ee51cb80a1d628c66c2626408cf21e7d8a1ab4461f70a4ac87d5374f2873845" exitCode=0 Feb 18 12:56:39 crc kubenswrapper[4717]: I0218 12:56:39.748800 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z85mc" event={"ID":"6d5c173e-5ec2-44a0-b6df-a61172801628","Type":"ContainerStarted","Data":"e75f66cb607cea361357bf74f9911c3047710f283aa38b48aa946a15e3cbed93"} Feb 18 12:56:40 crc kubenswrapper[4717]: I0218 12:56:40.758380 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z85mc" event={"ID":"6d5c173e-5ec2-44a0-b6df-a61172801628","Type":"ContainerStarted","Data":"0b719b635fdade88f988b5f64c0db0bf91613aa0c2a6e94808599b7b8be997cf"} Feb 18 12:56:42 crc kubenswrapper[4717]: I0218 12:56:42.781748 4717 generic.go:334] "Generic (PLEG): container finished" podID="6d5c173e-5ec2-44a0-b6df-a61172801628" containerID="0b719b635fdade88f988b5f64c0db0bf91613aa0c2a6e94808599b7b8be997cf" exitCode=0 Feb 18 12:56:42 crc kubenswrapper[4717]: I0218 12:56:42.781837 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z85mc" 
event={"ID":"6d5c173e-5ec2-44a0-b6df-a61172801628","Type":"ContainerDied","Data":"0b719b635fdade88f988b5f64c0db0bf91613aa0c2a6e94808599b7b8be997cf"} Feb 18 12:56:43 crc kubenswrapper[4717]: I0218 12:56:43.800914 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z85mc" event={"ID":"6d5c173e-5ec2-44a0-b6df-a61172801628","Type":"ContainerStarted","Data":"9108cc889a76aeee42459258443454c5ca7b3b9dbc796020f383f6a9cd89ba29"} Feb 18 12:56:43 crc kubenswrapper[4717]: I0218 12:56:43.821607 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z85mc" podStartSLOduration=3.337368876 podStartE2EDuration="6.821585214s" podCreationTimestamp="2026-02-18 12:56:37 +0000 UTC" firstStartedPulling="2026-02-18 12:56:39.751026093 +0000 UTC m=+4034.153127409" lastFinishedPulling="2026-02-18 12:56:43.235242401 +0000 UTC m=+4037.637343747" observedRunningTime="2026-02-18 12:56:43.819096012 +0000 UTC m=+4038.221197328" watchObservedRunningTime="2026-02-18 12:56:43.821585214 +0000 UTC m=+4038.223686530" Feb 18 12:56:45 crc kubenswrapper[4717]: I0218 12:56:45.042454 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:56:45 crc kubenswrapper[4717]: E0218 12:56:45.045347 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:56:48 crc kubenswrapper[4717]: I0218 12:56:48.489739 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:56:48 crc 
kubenswrapper[4717]: I0218 12:56:48.492078 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:56:49 crc kubenswrapper[4717]: I0218 12:56:49.545493 4717 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z85mc" podUID="6d5c173e-5ec2-44a0-b6df-a61172801628" containerName="registry-server" probeResult="failure" output=< Feb 18 12:56:49 crc kubenswrapper[4717]: timeout: failed to connect service ":50051" within 1s Feb 18 12:56:49 crc kubenswrapper[4717]: > Feb 18 12:56:53 crc kubenswrapper[4717]: I0218 12:56:53.970889 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jmlmn_c53aaac1-4a8c-439e-8d51-60054a95ed11/control-plane-machine-set-operator/0.log" Feb 18 12:56:54 crc kubenswrapper[4717]: I0218 12:56:54.232115 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kzbdk_baa2972a-fc13-4b3b-bf4b-9dceaf35db41/kube-rbac-proxy/0.log" Feb 18 12:56:54 crc kubenswrapper[4717]: I0218 12:56:54.242931 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kzbdk_baa2972a-fc13-4b3b-bf4b-9dceaf35db41/machine-api-operator/0.log" Feb 18 12:56:58 crc kubenswrapper[4717]: I0218 12:56:58.546128 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:56:58 crc kubenswrapper[4717]: I0218 12:56:58.600368 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:56:58 crc kubenswrapper[4717]: I0218 12:56:58.793209 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z85mc"] Feb 18 12:56:59 crc kubenswrapper[4717]: I0218 12:56:59.983183 4717 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z85mc" podUID="6d5c173e-5ec2-44a0-b6df-a61172801628" containerName="registry-server" containerID="cri-o://9108cc889a76aeee42459258443454c5ca7b3b9dbc796020f383f6a9cd89ba29" gracePeriod=2 Feb 18 12:57:00 crc kubenswrapper[4717]: I0218 12:57:00.036845 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:57:00 crc kubenswrapper[4717]: E0218 12:57:00.037473 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:57:00 crc kubenswrapper[4717]: I0218 12:57:00.993895 4717 generic.go:334] "Generic (PLEG): container finished" podID="6d5c173e-5ec2-44a0-b6df-a61172801628" containerID="9108cc889a76aeee42459258443454c5ca7b3b9dbc796020f383f6a9cd89ba29" exitCode=0 Feb 18 12:57:00 crc kubenswrapper[4717]: I0218 12:57:00.993932 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z85mc" event={"ID":"6d5c173e-5ec2-44a0-b6df-a61172801628","Type":"ContainerDied","Data":"9108cc889a76aeee42459258443454c5ca7b3b9dbc796020f383f6a9cd89ba29"} Feb 18 12:57:00 crc kubenswrapper[4717]: I0218 12:57:00.994306 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z85mc" event={"ID":"6d5c173e-5ec2-44a0-b6df-a61172801628","Type":"ContainerDied","Data":"e75f66cb607cea361357bf74f9911c3047710f283aa38b48aa946a15e3cbed93"} Feb 18 12:57:00 crc kubenswrapper[4717]: I0218 12:57:00.994326 4717 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="e75f66cb607cea361357bf74f9911c3047710f283aa38b48aa946a15e3cbed93" Feb 18 12:57:01 crc kubenswrapper[4717]: I0218 12:57:01.092312 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:57:01 crc kubenswrapper[4717]: I0218 12:57:01.245545 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5c173e-5ec2-44a0-b6df-a61172801628-utilities\") pod \"6d5c173e-5ec2-44a0-b6df-a61172801628\" (UID: \"6d5c173e-5ec2-44a0-b6df-a61172801628\") " Feb 18 12:57:01 crc kubenswrapper[4717]: I0218 12:57:01.245608 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5c173e-5ec2-44a0-b6df-a61172801628-catalog-content\") pod \"6d5c173e-5ec2-44a0-b6df-a61172801628\" (UID: \"6d5c173e-5ec2-44a0-b6df-a61172801628\") " Feb 18 12:57:01 crc kubenswrapper[4717]: I0218 12:57:01.245655 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kvrl\" (UniqueName: \"kubernetes.io/projected/6d5c173e-5ec2-44a0-b6df-a61172801628-kube-api-access-9kvrl\") pod \"6d5c173e-5ec2-44a0-b6df-a61172801628\" (UID: \"6d5c173e-5ec2-44a0-b6df-a61172801628\") " Feb 18 12:57:01 crc kubenswrapper[4717]: I0218 12:57:01.247220 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d5c173e-5ec2-44a0-b6df-a61172801628-utilities" (OuterVolumeSpecName: "utilities") pod "6d5c173e-5ec2-44a0-b6df-a61172801628" (UID: "6d5c173e-5ec2-44a0-b6df-a61172801628"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:57:01 crc kubenswrapper[4717]: I0218 12:57:01.265784 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d5c173e-5ec2-44a0-b6df-a61172801628-kube-api-access-9kvrl" (OuterVolumeSpecName: "kube-api-access-9kvrl") pod "6d5c173e-5ec2-44a0-b6df-a61172801628" (UID: "6d5c173e-5ec2-44a0-b6df-a61172801628"). InnerVolumeSpecName "kube-api-access-9kvrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:57:01 crc kubenswrapper[4717]: I0218 12:57:01.348453 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5c173e-5ec2-44a0-b6df-a61172801628-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:57:01 crc kubenswrapper[4717]: I0218 12:57:01.348500 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kvrl\" (UniqueName: \"kubernetes.io/projected/6d5c173e-5ec2-44a0-b6df-a61172801628-kube-api-access-9kvrl\") on node \"crc\" DevicePath \"\"" Feb 18 12:57:01 crc kubenswrapper[4717]: I0218 12:57:01.379185 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d5c173e-5ec2-44a0-b6df-a61172801628-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d5c173e-5ec2-44a0-b6df-a61172801628" (UID: "6d5c173e-5ec2-44a0-b6df-a61172801628"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:57:01 crc kubenswrapper[4717]: I0218 12:57:01.451323 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5c173e-5ec2-44a0-b6df-a61172801628-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:57:02 crc kubenswrapper[4717]: I0218 12:57:02.001924 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z85mc" Feb 18 12:57:02 crc kubenswrapper[4717]: I0218 12:57:02.038322 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z85mc"] Feb 18 12:57:02 crc kubenswrapper[4717]: I0218 12:57:02.048349 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z85mc"] Feb 18 12:57:03 crc kubenswrapper[4717]: I0218 12:57:03.048638 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d5c173e-5ec2-44a0-b6df-a61172801628" path="/var/lib/kubelet/pods/6d5c173e-5ec2-44a0-b6df-a61172801628/volumes" Feb 18 12:57:08 crc kubenswrapper[4717]: I0218 12:57:08.658368 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-mr9jc_d10e332f-4255-4315-bf68-1b479919ed9c/cert-manager-controller/0.log" Feb 18 12:57:08 crc kubenswrapper[4717]: I0218 12:57:08.894914 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-qx54w_70bc2303-bab2-48bc-a4a3-4c19b86571aa/cert-manager-cainjector/0.log" Feb 18 12:57:08 crc kubenswrapper[4717]: I0218 12:57:08.924998 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-8dvlf_efe0486e-8153-4083-aedf-15085839219b/cert-manager-webhook/0.log" Feb 18 12:57:13 crc kubenswrapper[4717]: I0218 12:57:13.036515 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:57:13 crc kubenswrapper[4717]: E0218 12:57:13.037370 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:57:24 crc kubenswrapper[4717]: I0218 12:57:24.106991 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-z9wv4_f80bcf06-9be6-4c29-9ed7-d575837ff0d6/nmstate-console-plugin/0.log" Feb 18 12:57:24 crc kubenswrapper[4717]: I0218 12:57:24.388276 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-nb8mk_83d51357-d0dc-4297-9449-a066463019f7/nmstate-metrics/0.log" Feb 18 12:57:24 crc kubenswrapper[4717]: I0218 12:57:24.410420 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-j27f6_58fee9d2-2e42-46e4-b5a2-8b8c80a52424/nmstate-handler/0.log" Feb 18 12:57:24 crc kubenswrapper[4717]: I0218 12:57:24.430119 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-nb8mk_83d51357-d0dc-4297-9449-a066463019f7/kube-rbac-proxy/0.log" Feb 18 12:57:24 crc kubenswrapper[4717]: I0218 12:57:24.586529 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-sj6bs_42edbbd9-e0db-4a1f-b9fc-c0987cae7f48/nmstate-operator/0.log" Feb 18 12:57:24 crc kubenswrapper[4717]: I0218 12:57:24.664563 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-4xf48_7f6c194d-4ed9-4ab8-af9c-bb9c44324b0d/nmstate-webhook/0.log" Feb 18 12:57:27 crc kubenswrapper[4717]: I0218 12:57:27.060682 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:57:27 crc kubenswrapper[4717]: E0218 12:57:27.061413 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:57:42 crc kubenswrapper[4717]: I0218 12:57:42.036549 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:57:42 crc kubenswrapper[4717]: E0218 12:57:42.037525 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:57:52 crc kubenswrapper[4717]: I0218 12:57:52.739054 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-fbbnw_060557a2-52b7-4e87-908f-0ea8b0febb4c/kube-rbac-proxy/0.log" Feb 18 12:57:52 crc kubenswrapper[4717]: I0218 12:57:52.876832 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-fbbnw_060557a2-52b7-4e87-908f-0ea8b0febb4c/controller/0.log" Feb 18 12:57:53 crc kubenswrapper[4717]: I0218 12:57:53.085911 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-frr-files/0.log" Feb 18 12:57:53 crc kubenswrapper[4717]: I0218 12:57:53.222835 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-reloader/0.log" Feb 18 12:57:53 crc kubenswrapper[4717]: I0218 12:57:53.254801 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-metrics/0.log" Feb 18 12:57:53 
crc kubenswrapper[4717]: I0218 12:57:53.283187 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-reloader/0.log" Feb 18 12:57:53 crc kubenswrapper[4717]: I0218 12:57:53.286524 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-frr-files/0.log" Feb 18 12:57:53 crc kubenswrapper[4717]: I0218 12:57:53.713356 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-reloader/0.log" Feb 18 12:57:53 crc kubenswrapper[4717]: I0218 12:57:53.716628 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-metrics/0.log" Feb 18 12:57:53 crc kubenswrapper[4717]: I0218 12:57:53.736884 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-metrics/0.log" Feb 18 12:57:53 crc kubenswrapper[4717]: I0218 12:57:53.739440 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-frr-files/0.log" Feb 18 12:57:53 crc kubenswrapper[4717]: I0218 12:57:53.910619 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-reloader/0.log" Feb 18 12:57:53 crc kubenswrapper[4717]: I0218 12:57:53.921535 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/controller/0.log" Feb 18 12:57:53 crc kubenswrapper[4717]: I0218 12:57:53.927926 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-metrics/0.log" Feb 18 12:57:53 crc kubenswrapper[4717]: I0218 12:57:53.928671 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/cp-frr-files/0.log" Feb 18 12:57:54 crc kubenswrapper[4717]: I0218 12:57:54.087452 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/frr-metrics/0.log" Feb 18 12:57:54 crc kubenswrapper[4717]: I0218 12:57:54.136786 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/kube-rbac-proxy/0.log" Feb 18 12:57:54 crc kubenswrapper[4717]: I0218 12:57:54.212099 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/kube-rbac-proxy-frr/0.log" Feb 18 12:57:54 crc kubenswrapper[4717]: I0218 12:57:54.344412 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/reloader/0.log" Feb 18 12:57:54 crc kubenswrapper[4717]: I0218 12:57:54.525221 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-xr4mg_3c7a04c5-e38e-41bf-9343-b567857783d6/frr-k8s-webhook-server/0.log" Feb 18 12:57:54 crc kubenswrapper[4717]: I0218 12:57:54.725287 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6ccf94b89b-k7n5s_4cc66c29-35b2-4c85-95d0-ad78febc48c8/manager/0.log" Feb 18 12:57:54 crc kubenswrapper[4717]: I0218 12:57:54.799923 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-fb5446db6-w9jm7_d5bf9065-9c80-484d-9700-dc484f20a071/webhook-server/0.log" Feb 18 12:57:55 crc kubenswrapper[4717]: I0218 12:57:55.005382 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vfwxr_d668cbc3-c191-43fc-bb6f-64f4b7bdb969/kube-rbac-proxy/0.log" Feb 18 12:57:55 crc kubenswrapper[4717]: I0218 12:57:55.562666 4717 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vfwxr_d668cbc3-c191-43fc-bb6f-64f4b7bdb969/speaker/0.log" Feb 18 12:57:55 crc kubenswrapper[4717]: I0218 12:57:55.813919 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hwtqk_74dfbdfa-ea21-46dd-8dac-c8aac0050e51/frr/0.log" Feb 18 12:57:56 crc kubenswrapper[4717]: I0218 12:57:56.037102 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:57:56 crc kubenswrapper[4717]: E0218 12:57:56.037406 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:58:08 crc kubenswrapper[4717]: I0218 12:58:08.036805 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:58:08 crc kubenswrapper[4717]: E0218 12:58:08.037676 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:58:08 crc kubenswrapper[4717]: I0218 12:58:08.676924 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd_3a791e8a-cfda-4171-8b8b-1828dcae5419/util/0.log" Feb 18 12:58:08 crc kubenswrapper[4717]: I0218 
12:58:08.884664 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd_3a791e8a-cfda-4171-8b8b-1828dcae5419/util/0.log" Feb 18 12:58:08 crc kubenswrapper[4717]: I0218 12:58:08.905598 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd_3a791e8a-cfda-4171-8b8b-1828dcae5419/pull/0.log" Feb 18 12:58:08 crc kubenswrapper[4717]: I0218 12:58:08.905908 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd_3a791e8a-cfda-4171-8b8b-1828dcae5419/pull/0.log" Feb 18 12:58:09 crc kubenswrapper[4717]: I0218 12:58:09.077882 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd_3a791e8a-cfda-4171-8b8b-1828dcae5419/util/0.log" Feb 18 12:58:09 crc kubenswrapper[4717]: I0218 12:58:09.087078 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd_3a791e8a-cfda-4171-8b8b-1828dcae5419/extract/0.log" Feb 18 12:58:09 crc kubenswrapper[4717]: I0218 12:58:09.111344 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213xv7pd_3a791e8a-cfda-4171-8b8b-1828dcae5419/pull/0.log" Feb 18 12:58:09 crc kubenswrapper[4717]: I0218 12:58:09.268214 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzfwz_89193f23-0851-4c72-8fa7-bdefb5b47de9/extract-utilities/0.log" Feb 18 12:58:09 crc kubenswrapper[4717]: I0218 12:58:09.462896 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzfwz_89193f23-0851-4c72-8fa7-bdefb5b47de9/extract-content/0.log" 
Feb 18 12:58:09 crc kubenswrapper[4717]: I0218 12:58:09.466572 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzfwz_89193f23-0851-4c72-8fa7-bdefb5b47de9/extract-utilities/0.log" Feb 18 12:58:09 crc kubenswrapper[4717]: I0218 12:58:09.488053 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzfwz_89193f23-0851-4c72-8fa7-bdefb5b47de9/extract-content/0.log" Feb 18 12:58:09 crc kubenswrapper[4717]: I0218 12:58:09.648607 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzfwz_89193f23-0851-4c72-8fa7-bdefb5b47de9/extract-utilities/0.log" Feb 18 12:58:09 crc kubenswrapper[4717]: I0218 12:58:09.686137 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nzfwz_89193f23-0851-4c72-8fa7-bdefb5b47de9/extract-content/0.log" Feb 18 12:58:09 crc kubenswrapper[4717]: I0218 12:58:09.939542 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rv6t6_253e019a-02ea-41f5-bf51-52340512ad50/extract-utilities/0.log" Feb 18 12:58:10 crc kubenswrapper[4717]: I0218 12:58:10.152742 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rv6t6_253e019a-02ea-41f5-bf51-52340512ad50/extract-content/0.log" Feb 18 12:58:10 crc kubenswrapper[4717]: I0218 12:58:10.183282 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rv6t6_253e019a-02ea-41f5-bf51-52340512ad50/extract-utilities/0.log" Feb 18 12:58:10 crc kubenswrapper[4717]: I0218 12:58:10.183378 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rv6t6_253e019a-02ea-41f5-bf51-52340512ad50/extract-content/0.log" Feb 18 12:58:10 crc kubenswrapper[4717]: I0218 12:58:10.227694 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-nzfwz_89193f23-0851-4c72-8fa7-bdefb5b47de9/registry-server/0.log" Feb 18 12:58:10 crc kubenswrapper[4717]: I0218 12:58:10.373452 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rv6t6_253e019a-02ea-41f5-bf51-52340512ad50/extract-content/0.log" Feb 18 12:58:10 crc kubenswrapper[4717]: I0218 12:58:10.401728 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rv6t6_253e019a-02ea-41f5-bf51-52340512ad50/extract-utilities/0.log" Feb 18 12:58:10 crc kubenswrapper[4717]: I0218 12:58:10.610807 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd_ce95cdae-125f-4394-8f29-8d718f8297c4/util/0.log" Feb 18 12:58:10 crc kubenswrapper[4717]: I0218 12:58:10.832734 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd_ce95cdae-125f-4394-8f29-8d718f8297c4/util/0.log" Feb 18 12:58:10 crc kubenswrapper[4717]: I0218 12:58:10.869519 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd_ce95cdae-125f-4394-8f29-8d718f8297c4/pull/0.log" Feb 18 12:58:10 crc kubenswrapper[4717]: I0218 12:58:10.927652 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd_ce95cdae-125f-4394-8f29-8d718f8297c4/pull/0.log" Feb 18 12:58:11 crc kubenswrapper[4717]: I0218 12:58:11.131305 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rv6t6_253e019a-02ea-41f5-bf51-52340512ad50/registry-server/0.log" Feb 18 12:58:11 crc kubenswrapper[4717]: I0218 12:58:11.167441 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd_ce95cdae-125f-4394-8f29-8d718f8297c4/pull/0.log" Feb 18 12:58:11 crc kubenswrapper[4717]: I0218 12:58:11.187397 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd_ce95cdae-125f-4394-8f29-8d718f8297c4/util/0.log" Feb 18 12:58:11 crc kubenswrapper[4717]: I0218 12:58:11.201013 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav5czd_ce95cdae-125f-4394-8f29-8d718f8297c4/extract/0.log" Feb 18 12:58:11 crc kubenswrapper[4717]: I0218 12:58:11.384725 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-h4tq9_27aacb4e-b587-400b-a73b-d7d27d3e2bb6/marketplace-operator/0.log" Feb 18 12:58:11 crc kubenswrapper[4717]: I0218 12:58:11.463738 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9qfgx_8986e8c3-7ce9-40ca-94dd-8258ee800dc3/extract-utilities/0.log" Feb 18 12:58:11 crc kubenswrapper[4717]: I0218 12:58:11.624464 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9qfgx_8986e8c3-7ce9-40ca-94dd-8258ee800dc3/extract-utilities/0.log" Feb 18 12:58:11 crc kubenswrapper[4717]: I0218 12:58:11.630793 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9qfgx_8986e8c3-7ce9-40ca-94dd-8258ee800dc3/extract-content/0.log" Feb 18 12:58:11 crc kubenswrapper[4717]: I0218 12:58:11.678981 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9qfgx_8986e8c3-7ce9-40ca-94dd-8258ee800dc3/extract-content/0.log" Feb 18 12:58:11 crc kubenswrapper[4717]: I0218 12:58:11.852807 4717 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-9qfgx_8986e8c3-7ce9-40ca-94dd-8258ee800dc3/extract-content/0.log" Feb 18 12:58:11 crc kubenswrapper[4717]: I0218 12:58:11.864601 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9qfgx_8986e8c3-7ce9-40ca-94dd-8258ee800dc3/extract-utilities/0.log" Feb 18 12:58:11 crc kubenswrapper[4717]: I0218 12:58:11.952998 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9qfgx_8986e8c3-7ce9-40ca-94dd-8258ee800dc3/registry-server/0.log" Feb 18 12:58:12 crc kubenswrapper[4717]: I0218 12:58:12.085545 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gp2r2_6724ff6a-bb09-4f26-a225-c815a47da5fc/extract-utilities/0.log" Feb 18 12:58:12 crc kubenswrapper[4717]: I0218 12:58:12.274772 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gp2r2_6724ff6a-bb09-4f26-a225-c815a47da5fc/extract-content/0.log" Feb 18 12:58:12 crc kubenswrapper[4717]: I0218 12:58:12.278689 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gp2r2_6724ff6a-bb09-4f26-a225-c815a47da5fc/extract-content/0.log" Feb 18 12:58:12 crc kubenswrapper[4717]: I0218 12:58:12.285629 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gp2r2_6724ff6a-bb09-4f26-a225-c815a47da5fc/extract-utilities/0.log" Feb 18 12:58:12 crc kubenswrapper[4717]: I0218 12:58:12.797438 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gp2r2_6724ff6a-bb09-4f26-a225-c815a47da5fc/extract-utilities/0.log" Feb 18 12:58:12 crc kubenswrapper[4717]: I0218 12:58:12.822886 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gp2r2_6724ff6a-bb09-4f26-a225-c815a47da5fc/extract-content/0.log" Feb 
18 12:58:13 crc kubenswrapper[4717]: I0218 12:58:13.011764 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gp2r2_6724ff6a-bb09-4f26-a225-c815a47da5fc/registry-server/0.log" Feb 18 12:58:19 crc kubenswrapper[4717]: I0218 12:58:19.036989 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:58:19 crc kubenswrapper[4717]: E0218 12:58:19.037835 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:58:30 crc kubenswrapper[4717]: I0218 12:58:30.037350 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:58:30 crc kubenswrapper[4717]: E0218 12:58:30.038062 4717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5wbk5_openshift-machine-config-operator(823580ef-975b-4298-955b-fb3c0b5fefc3)\"" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" Feb 18 12:58:45 crc kubenswrapper[4717]: I0218 12:58:45.036810 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 12:58:45 crc kubenswrapper[4717]: I0218 12:58:45.943380 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" 
event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"4e1e883061a234b6789e4237399a5987905c9f9f0f9ff4bae020b2c326bc1a9a"} Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.277330 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sd4b7"] Feb 18 12:58:58 crc kubenswrapper[4717]: E0218 12:58:58.278560 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5c173e-5ec2-44a0-b6df-a61172801628" containerName="registry-server" Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.278580 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5c173e-5ec2-44a0-b6df-a61172801628" containerName="registry-server" Feb 18 12:58:58 crc kubenswrapper[4717]: E0218 12:58:58.278601 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5c173e-5ec2-44a0-b6df-a61172801628" containerName="extract-utilities" Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.278610 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5c173e-5ec2-44a0-b6df-a61172801628" containerName="extract-utilities" Feb 18 12:58:58 crc kubenswrapper[4717]: E0218 12:58:58.278629 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5c173e-5ec2-44a0-b6df-a61172801628" containerName="extract-content" Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.278637 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5c173e-5ec2-44a0-b6df-a61172801628" containerName="extract-content" Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.278926 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d5c173e-5ec2-44a0-b6df-a61172801628" containerName="registry-server" Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.280670 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.287943 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sd4b7"] Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.376010 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e487af-9e5a-449d-8e5c-716e5899b470-utilities\") pod \"certified-operators-sd4b7\" (UID: \"48e487af-9e5a-449d-8e5c-716e5899b470\") " pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.376071 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e487af-9e5a-449d-8e5c-716e5899b470-catalog-content\") pod \"certified-operators-sd4b7\" (UID: \"48e487af-9e5a-449d-8e5c-716e5899b470\") " pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.376577 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g8lc\" (UniqueName: \"kubernetes.io/projected/48e487af-9e5a-449d-8e5c-716e5899b470-kube-api-access-8g8lc\") pod \"certified-operators-sd4b7\" (UID: \"48e487af-9e5a-449d-8e5c-716e5899b470\") " pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.479115 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g8lc\" (UniqueName: \"kubernetes.io/projected/48e487af-9e5a-449d-8e5c-716e5899b470-kube-api-access-8g8lc\") pod \"certified-operators-sd4b7\" (UID: \"48e487af-9e5a-449d-8e5c-716e5899b470\") " pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.479208 4717 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e487af-9e5a-449d-8e5c-716e5899b470-utilities\") pod \"certified-operators-sd4b7\" (UID: \"48e487af-9e5a-449d-8e5c-716e5899b470\") " pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.479235 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e487af-9e5a-449d-8e5c-716e5899b470-catalog-content\") pod \"certified-operators-sd4b7\" (UID: \"48e487af-9e5a-449d-8e5c-716e5899b470\") " pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.479947 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e487af-9e5a-449d-8e5c-716e5899b470-catalog-content\") pod \"certified-operators-sd4b7\" (UID: \"48e487af-9e5a-449d-8e5c-716e5899b470\") " pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.480100 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e487af-9e5a-449d-8e5c-716e5899b470-utilities\") pod \"certified-operators-sd4b7\" (UID: \"48e487af-9e5a-449d-8e5c-716e5899b470\") " pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.507778 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g8lc\" (UniqueName: \"kubernetes.io/projected/48e487af-9e5a-449d-8e5c-716e5899b470-kube-api-access-8g8lc\") pod \"certified-operators-sd4b7\" (UID: \"48e487af-9e5a-449d-8e5c-716e5899b470\") " pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.612089 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:58:58 crc kubenswrapper[4717]: I0218 12:58:58.928763 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sd4b7"] Feb 18 12:58:59 crc kubenswrapper[4717]: I0218 12:58:59.123929 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sd4b7" event={"ID":"48e487af-9e5a-449d-8e5c-716e5899b470","Type":"ContainerStarted","Data":"956fe4dac2c9c3652c04ec0039e787dbd18f9d51c5135f37d331640f2832c825"} Feb 18 12:59:00 crc kubenswrapper[4717]: I0218 12:59:00.137869 4717 generic.go:334] "Generic (PLEG): container finished" podID="48e487af-9e5a-449d-8e5c-716e5899b470" containerID="1f03fbd969e139ddd1c3fe7ea3eccf6b21a450ec08d5d334fe5e3fd0fe03daa1" exitCode=0 Feb 18 12:59:00 crc kubenswrapper[4717]: I0218 12:59:00.138433 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sd4b7" event={"ID":"48e487af-9e5a-449d-8e5c-716e5899b470","Type":"ContainerDied","Data":"1f03fbd969e139ddd1c3fe7ea3eccf6b21a450ec08d5d334fe5e3fd0fe03daa1"} Feb 18 12:59:01 crc kubenswrapper[4717]: I0218 12:59:01.169927 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sd4b7" event={"ID":"48e487af-9e5a-449d-8e5c-716e5899b470","Type":"ContainerStarted","Data":"2b22a1f56cf8982f0356884e5e335ce4fc9b4f8ae0031af0deefa41e5ef8065d"} Feb 18 12:59:03 crc kubenswrapper[4717]: I0218 12:59:03.190975 4717 generic.go:334] "Generic (PLEG): container finished" podID="48e487af-9e5a-449d-8e5c-716e5899b470" containerID="2b22a1f56cf8982f0356884e5e335ce4fc9b4f8ae0031af0deefa41e5ef8065d" exitCode=0 Feb 18 12:59:03 crc kubenswrapper[4717]: I0218 12:59:03.191219 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sd4b7" 
event={"ID":"48e487af-9e5a-449d-8e5c-716e5899b470","Type":"ContainerDied","Data":"2b22a1f56cf8982f0356884e5e335ce4fc9b4f8ae0031af0deefa41e5ef8065d"} Feb 18 12:59:04 crc kubenswrapper[4717]: I0218 12:59:04.225054 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sd4b7" event={"ID":"48e487af-9e5a-449d-8e5c-716e5899b470","Type":"ContainerStarted","Data":"ca8dfe14a42fee91805964536bdcceb5a338a4ddb056b749cb77393f535e7dfc"} Feb 18 12:59:04 crc kubenswrapper[4717]: I0218 12:59:04.256290 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sd4b7" podStartSLOduration=2.822927584 podStartE2EDuration="6.25624579s" podCreationTimestamp="2026-02-18 12:58:58 +0000 UTC" firstStartedPulling="2026-02-18 12:59:00.141934253 +0000 UTC m=+4174.544035569" lastFinishedPulling="2026-02-18 12:59:03.575252459 +0000 UTC m=+4177.977353775" observedRunningTime="2026-02-18 12:59:04.249149536 +0000 UTC m=+4178.651250852" watchObservedRunningTime="2026-02-18 12:59:04.25624579 +0000 UTC m=+4178.658347106" Feb 18 12:59:08 crc kubenswrapper[4717]: I0218 12:59:08.612567 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:59:08 crc kubenswrapper[4717]: I0218 12:59:08.613541 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:59:08 crc kubenswrapper[4717]: I0218 12:59:08.675902 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:59:09 crc kubenswrapper[4717]: I0218 12:59:09.876648 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:59:09 crc kubenswrapper[4717]: I0218 12:59:09.937634 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-sd4b7"] Feb 18 12:59:11 crc kubenswrapper[4717]: I0218 12:59:11.307685 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sd4b7" podUID="48e487af-9e5a-449d-8e5c-716e5899b470" containerName="registry-server" containerID="cri-o://ca8dfe14a42fee91805964536bdcceb5a338a4ddb056b749cb77393f535e7dfc" gracePeriod=2 Feb 18 12:59:12 crc kubenswrapper[4717]: I0218 12:59:12.822788 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:59:12 crc kubenswrapper[4717]: I0218 12:59:12.965892 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g8lc\" (UniqueName: \"kubernetes.io/projected/48e487af-9e5a-449d-8e5c-716e5899b470-kube-api-access-8g8lc\") pod \"48e487af-9e5a-449d-8e5c-716e5899b470\" (UID: \"48e487af-9e5a-449d-8e5c-716e5899b470\") " Feb 18 12:59:12 crc kubenswrapper[4717]: I0218 12:59:12.965949 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e487af-9e5a-449d-8e5c-716e5899b470-utilities\") pod \"48e487af-9e5a-449d-8e5c-716e5899b470\" (UID: \"48e487af-9e5a-449d-8e5c-716e5899b470\") " Feb 18 12:59:12 crc kubenswrapper[4717]: I0218 12:59:12.966017 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e487af-9e5a-449d-8e5c-716e5899b470-catalog-content\") pod \"48e487af-9e5a-449d-8e5c-716e5899b470\" (UID: \"48e487af-9e5a-449d-8e5c-716e5899b470\") " Feb 18 12:59:12 crc kubenswrapper[4717]: I0218 12:59:12.968021 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48e487af-9e5a-449d-8e5c-716e5899b470-utilities" (OuterVolumeSpecName: "utilities") pod "48e487af-9e5a-449d-8e5c-716e5899b470" (UID: 
"48e487af-9e5a-449d-8e5c-716e5899b470"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:59:12 crc kubenswrapper[4717]: I0218 12:59:12.972612 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48e487af-9e5a-449d-8e5c-716e5899b470-kube-api-access-8g8lc" (OuterVolumeSpecName: "kube-api-access-8g8lc") pod "48e487af-9e5a-449d-8e5c-716e5899b470" (UID: "48e487af-9e5a-449d-8e5c-716e5899b470"). InnerVolumeSpecName "kube-api-access-8g8lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.019599 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48e487af-9e5a-449d-8e5c-716e5899b470-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48e487af-9e5a-449d-8e5c-716e5899b470" (UID: "48e487af-9e5a-449d-8e5c-716e5899b470"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.067584 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g8lc\" (UniqueName: \"kubernetes.io/projected/48e487af-9e5a-449d-8e5c-716e5899b470-kube-api-access-8g8lc\") on node \"crc\" DevicePath \"\"" Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.067612 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48e487af-9e5a-449d-8e5c-716e5899b470-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.067624 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48e487af-9e5a-449d-8e5c-716e5899b470-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.326652 4717 generic.go:334] "Generic (PLEG): container finished" 
podID="48e487af-9e5a-449d-8e5c-716e5899b470" containerID="ca8dfe14a42fee91805964536bdcceb5a338a4ddb056b749cb77393f535e7dfc" exitCode=0 Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.326705 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sd4b7" event={"ID":"48e487af-9e5a-449d-8e5c-716e5899b470","Type":"ContainerDied","Data":"ca8dfe14a42fee91805964536bdcceb5a338a4ddb056b749cb77393f535e7dfc"} Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.326736 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sd4b7" event={"ID":"48e487af-9e5a-449d-8e5c-716e5899b470","Type":"ContainerDied","Data":"956fe4dac2c9c3652c04ec0039e787dbd18f9d51c5135f37d331640f2832c825"} Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.326757 4717 scope.go:117] "RemoveContainer" containerID="ca8dfe14a42fee91805964536bdcceb5a338a4ddb056b749cb77393f535e7dfc" Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.326955 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sd4b7" Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.362834 4717 scope.go:117] "RemoveContainer" containerID="2b22a1f56cf8982f0356884e5e335ce4fc9b4f8ae0031af0deefa41e5ef8065d" Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.365832 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sd4b7"] Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.379384 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sd4b7"] Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.413631 4717 scope.go:117] "RemoveContainer" containerID="1f03fbd969e139ddd1c3fe7ea3eccf6b21a450ec08d5d334fe5e3fd0fe03daa1" Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.469009 4717 scope.go:117] "RemoveContainer" containerID="ca8dfe14a42fee91805964536bdcceb5a338a4ddb056b749cb77393f535e7dfc" Feb 18 12:59:13 crc kubenswrapper[4717]: E0218 12:59:13.469777 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca8dfe14a42fee91805964536bdcceb5a338a4ddb056b749cb77393f535e7dfc\": container with ID starting with ca8dfe14a42fee91805964536bdcceb5a338a4ddb056b749cb77393f535e7dfc not found: ID does not exist" containerID="ca8dfe14a42fee91805964536bdcceb5a338a4ddb056b749cb77393f535e7dfc" Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.469804 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca8dfe14a42fee91805964536bdcceb5a338a4ddb056b749cb77393f535e7dfc"} err="failed to get container status \"ca8dfe14a42fee91805964536bdcceb5a338a4ddb056b749cb77393f535e7dfc\": rpc error: code = NotFound desc = could not find container \"ca8dfe14a42fee91805964536bdcceb5a338a4ddb056b749cb77393f535e7dfc\": container with ID starting with ca8dfe14a42fee91805964536bdcceb5a338a4ddb056b749cb77393f535e7dfc not 
found: ID does not exist" Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.469835 4717 scope.go:117] "RemoveContainer" containerID="2b22a1f56cf8982f0356884e5e335ce4fc9b4f8ae0031af0deefa41e5ef8065d" Feb 18 12:59:13 crc kubenswrapper[4717]: E0218 12:59:13.470466 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b22a1f56cf8982f0356884e5e335ce4fc9b4f8ae0031af0deefa41e5ef8065d\": container with ID starting with 2b22a1f56cf8982f0356884e5e335ce4fc9b4f8ae0031af0deefa41e5ef8065d not found: ID does not exist" containerID="2b22a1f56cf8982f0356884e5e335ce4fc9b4f8ae0031af0deefa41e5ef8065d" Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.470512 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b22a1f56cf8982f0356884e5e335ce4fc9b4f8ae0031af0deefa41e5ef8065d"} err="failed to get container status \"2b22a1f56cf8982f0356884e5e335ce4fc9b4f8ae0031af0deefa41e5ef8065d\": rpc error: code = NotFound desc = could not find container \"2b22a1f56cf8982f0356884e5e335ce4fc9b4f8ae0031af0deefa41e5ef8065d\": container with ID starting with 2b22a1f56cf8982f0356884e5e335ce4fc9b4f8ae0031af0deefa41e5ef8065d not found: ID does not exist" Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.470542 4717 scope.go:117] "RemoveContainer" containerID="1f03fbd969e139ddd1c3fe7ea3eccf6b21a450ec08d5d334fe5e3fd0fe03daa1" Feb 18 12:59:13 crc kubenswrapper[4717]: E0218 12:59:13.470900 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f03fbd969e139ddd1c3fe7ea3eccf6b21a450ec08d5d334fe5e3fd0fe03daa1\": container with ID starting with 1f03fbd969e139ddd1c3fe7ea3eccf6b21a450ec08d5d334fe5e3fd0fe03daa1 not found: ID does not exist" containerID="1f03fbd969e139ddd1c3fe7ea3eccf6b21a450ec08d5d334fe5e3fd0fe03daa1" Feb 18 12:59:13 crc kubenswrapper[4717]: I0218 12:59:13.470956 4717 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f03fbd969e139ddd1c3fe7ea3eccf6b21a450ec08d5d334fe5e3fd0fe03daa1"} err="failed to get container status \"1f03fbd969e139ddd1c3fe7ea3eccf6b21a450ec08d5d334fe5e3fd0fe03daa1\": rpc error: code = NotFound desc = could not find container \"1f03fbd969e139ddd1c3fe7ea3eccf6b21a450ec08d5d334fe5e3fd0fe03daa1\": container with ID starting with 1f03fbd969e139ddd1c3fe7ea3eccf6b21a450ec08d5d334fe5e3fd0fe03daa1 not found: ID does not exist" Feb 18 12:59:15 crc kubenswrapper[4717]: I0218 12:59:15.046725 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48e487af-9e5a-449d-8e5c-716e5899b470" path="/var/lib/kubelet/pods/48e487af-9e5a-449d-8e5c-716e5899b470/volumes" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.190818 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8"] Feb 18 13:00:00 crc kubenswrapper[4717]: E0218 13:00:00.191940 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e487af-9e5a-449d-8e5c-716e5899b470" containerName="extract-utilities" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.191956 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e487af-9e5a-449d-8e5c-716e5899b470" containerName="extract-utilities" Feb 18 13:00:00 crc kubenswrapper[4717]: E0218 13:00:00.191970 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e487af-9e5a-449d-8e5c-716e5899b470" containerName="registry-server" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.191976 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e487af-9e5a-449d-8e5c-716e5899b470" containerName="registry-server" Feb 18 13:00:00 crc kubenswrapper[4717]: E0218 13:00:00.191989 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e487af-9e5a-449d-8e5c-716e5899b470" containerName="extract-content" Feb 18 13:00:00 crc 
kubenswrapper[4717]: I0218 13:00:00.191997 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e487af-9e5a-449d-8e5c-716e5899b470" containerName="extract-content" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.192221 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="48e487af-9e5a-449d-8e5c-716e5899b470" containerName="registry-server" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.193039 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.195315 4717 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.197415 4717 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.208733 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8"] Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.355504 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c4017a0-38fc-4544-8183-bd551148a492-secret-volume\") pod \"collect-profiles-29523660-pwmf8\" (UID: \"7c4017a0-38fc-4544-8183-bd551148a492\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.355590 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c4017a0-38fc-4544-8183-bd551148a492-config-volume\") pod \"collect-profiles-29523660-pwmf8\" (UID: \"7c4017a0-38fc-4544-8183-bd551148a492\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.355803 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h78ch\" (UniqueName: \"kubernetes.io/projected/7c4017a0-38fc-4544-8183-bd551148a492-kube-api-access-h78ch\") pod \"collect-profiles-29523660-pwmf8\" (UID: \"7c4017a0-38fc-4544-8183-bd551148a492\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.458434 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c4017a0-38fc-4544-8183-bd551148a492-secret-volume\") pod \"collect-profiles-29523660-pwmf8\" (UID: \"7c4017a0-38fc-4544-8183-bd551148a492\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.458538 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c4017a0-38fc-4544-8183-bd551148a492-config-volume\") pod \"collect-profiles-29523660-pwmf8\" (UID: \"7c4017a0-38fc-4544-8183-bd551148a492\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.458598 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h78ch\" (UniqueName: \"kubernetes.io/projected/7c4017a0-38fc-4544-8183-bd551148a492-kube-api-access-h78ch\") pod \"collect-profiles-29523660-pwmf8\" (UID: \"7c4017a0-38fc-4544-8183-bd551148a492\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.460211 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/7c4017a0-38fc-4544-8183-bd551148a492-config-volume\") pod \"collect-profiles-29523660-pwmf8\" (UID: \"7c4017a0-38fc-4544-8183-bd551148a492\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.478796 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c4017a0-38fc-4544-8183-bd551148a492-secret-volume\") pod \"collect-profiles-29523660-pwmf8\" (UID: \"7c4017a0-38fc-4544-8183-bd551148a492\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.482023 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h78ch\" (UniqueName: \"kubernetes.io/projected/7c4017a0-38fc-4544-8183-bd551148a492-kube-api-access-h78ch\") pod \"collect-profiles-29523660-pwmf8\" (UID: \"7c4017a0-38fc-4544-8183-bd551148a492\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.516360 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8" Feb 18 13:00:00 crc kubenswrapper[4717]: I0218 13:00:00.957704 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8"] Feb 18 13:00:01 crc kubenswrapper[4717]: I0218 13:00:01.857867 4717 generic.go:334] "Generic (PLEG): container finished" podID="7c4017a0-38fc-4544-8183-bd551148a492" containerID="0a9511cc4419d5d2ca170e51e3cfe4531b31dedda7278e268aa9e974c9d403a6" exitCode=0 Feb 18 13:00:01 crc kubenswrapper[4717]: I0218 13:00:01.858136 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8" event={"ID":"7c4017a0-38fc-4544-8183-bd551148a492","Type":"ContainerDied","Data":"0a9511cc4419d5d2ca170e51e3cfe4531b31dedda7278e268aa9e974c9d403a6"} Feb 18 13:00:01 crc kubenswrapper[4717]: I0218 13:00:01.858360 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8" event={"ID":"7c4017a0-38fc-4544-8183-bd551148a492","Type":"ContainerStarted","Data":"a56f5d05ccf60dba86cc9ce47e26b75d7fb01709ed9ed44c388dd07532e28b6a"} Feb 18 13:00:02 crc kubenswrapper[4717]: I0218 13:00:02.872199 4717 generic.go:334] "Generic (PLEG): container finished" podID="da82de6a-fa31-4823-ba62-4211efc7efe1" containerID="988d5f33b9e8fdbca47bc5c736f222e682f08211dd169fcf0a086898eae0425e" exitCode=0 Feb 18 13:00:02 crc kubenswrapper[4717]: I0218 13:00:02.872303 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vsg65/must-gather-5j6qp" event={"ID":"da82de6a-fa31-4823-ba62-4211efc7efe1","Type":"ContainerDied","Data":"988d5f33b9e8fdbca47bc5c736f222e682f08211dd169fcf0a086898eae0425e"} Feb 18 13:00:02 crc kubenswrapper[4717]: I0218 13:00:02.875100 4717 scope.go:117] "RemoveContainer" containerID="988d5f33b9e8fdbca47bc5c736f222e682f08211dd169fcf0a086898eae0425e" Feb 18 
13:00:03 crc kubenswrapper[4717]: I0218 13:00:03.247951 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8" Feb 18 13:00:03 crc kubenswrapper[4717]: I0218 13:00:03.344430 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c4017a0-38fc-4544-8183-bd551148a492-secret-volume\") pod \"7c4017a0-38fc-4544-8183-bd551148a492\" (UID: \"7c4017a0-38fc-4544-8183-bd551148a492\") " Feb 18 13:00:03 crc kubenswrapper[4717]: I0218 13:00:03.344552 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h78ch\" (UniqueName: \"kubernetes.io/projected/7c4017a0-38fc-4544-8183-bd551148a492-kube-api-access-h78ch\") pod \"7c4017a0-38fc-4544-8183-bd551148a492\" (UID: \"7c4017a0-38fc-4544-8183-bd551148a492\") " Feb 18 13:00:03 crc kubenswrapper[4717]: I0218 13:00:03.346069 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c4017a0-38fc-4544-8183-bd551148a492-config-volume\") pod \"7c4017a0-38fc-4544-8183-bd551148a492\" (UID: \"7c4017a0-38fc-4544-8183-bd551148a492\") " Feb 18 13:00:03 crc kubenswrapper[4717]: I0218 13:00:03.347032 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c4017a0-38fc-4544-8183-bd551148a492-config-volume" (OuterVolumeSpecName: "config-volume") pod "7c4017a0-38fc-4544-8183-bd551148a492" (UID: "7c4017a0-38fc-4544-8183-bd551148a492"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:00:03 crc kubenswrapper[4717]: I0218 13:00:03.353018 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c4017a0-38fc-4544-8183-bd551148a492-kube-api-access-h78ch" (OuterVolumeSpecName: "kube-api-access-h78ch") pod "7c4017a0-38fc-4544-8183-bd551148a492" (UID: "7c4017a0-38fc-4544-8183-bd551148a492"). InnerVolumeSpecName "kube-api-access-h78ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:00:03 crc kubenswrapper[4717]: I0218 13:00:03.353609 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c4017a0-38fc-4544-8183-bd551148a492-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7c4017a0-38fc-4544-8183-bd551148a492" (UID: "7c4017a0-38fc-4544-8183-bd551148a492"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:00:03 crc kubenswrapper[4717]: I0218 13:00:03.448178 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h78ch\" (UniqueName: \"kubernetes.io/projected/7c4017a0-38fc-4544-8183-bd551148a492-kube-api-access-h78ch\") on node \"crc\" DevicePath \"\"" Feb 18 13:00:03 crc kubenswrapper[4717]: I0218 13:00:03.448584 4717 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c4017a0-38fc-4544-8183-bd551148a492-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 13:00:03 crc kubenswrapper[4717]: I0218 13:00:03.448760 4717 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c4017a0-38fc-4544-8183-bd551148a492-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 13:00:03 crc kubenswrapper[4717]: I0218 13:00:03.614166 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vsg65_must-gather-5j6qp_da82de6a-fa31-4823-ba62-4211efc7efe1/gather/0.log" Feb 18 
13:00:03 crc kubenswrapper[4717]: I0218 13:00:03.885200 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8" event={"ID":"7c4017a0-38fc-4544-8183-bd551148a492","Type":"ContainerDied","Data":"a56f5d05ccf60dba86cc9ce47e26b75d7fb01709ed9ed44c388dd07532e28b6a"} Feb 18 13:00:03 crc kubenswrapper[4717]: I0218 13:00:03.885734 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a56f5d05ccf60dba86cc9ce47e26b75d7fb01709ed9ed44c388dd07532e28b6a" Feb 18 13:00:03 crc kubenswrapper[4717]: I0218 13:00:03.885275 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523660-pwmf8" Feb 18 13:00:04 crc kubenswrapper[4717]: I0218 13:00:04.328167 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl"] Feb 18 13:00:04 crc kubenswrapper[4717]: I0218 13:00:04.339488 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523615-ng2xl"] Feb 18 13:00:05 crc kubenswrapper[4717]: I0218 13:00:05.053452 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb39d3e5-4f78-4359-9a18-f9241be6a618" path="/var/lib/kubelet/pods/fb39d3e5-4f78-4359-9a18-f9241be6a618/volumes" Feb 18 13:00:14 crc kubenswrapper[4717]: I0218 13:00:14.873990 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vsg65/must-gather-5j6qp"] Feb 18 13:00:14 crc kubenswrapper[4717]: I0218 13:00:14.874891 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vsg65/must-gather-5j6qp" podUID="da82de6a-fa31-4823-ba62-4211efc7efe1" containerName="copy" containerID="cri-o://8dde4249c34dcc7318277941bd1174a7a1e1a47987c9d0fc33372bc665d5faa3" gracePeriod=2 Feb 18 13:00:14 crc kubenswrapper[4717]: I0218 
13:00:14.884787 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vsg65/must-gather-5j6qp"] Feb 18 13:00:15 crc kubenswrapper[4717]: I0218 13:00:15.007845 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vsg65_must-gather-5j6qp_da82de6a-fa31-4823-ba62-4211efc7efe1/copy/0.log" Feb 18 13:00:15 crc kubenswrapper[4717]: I0218 13:00:15.012542 4717 generic.go:334] "Generic (PLEG): container finished" podID="da82de6a-fa31-4823-ba62-4211efc7efe1" containerID="8dde4249c34dcc7318277941bd1174a7a1e1a47987c9d0fc33372bc665d5faa3" exitCode=143 Feb 18 13:00:15 crc kubenswrapper[4717]: I0218 13:00:15.379154 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vsg65_must-gather-5j6qp_da82de6a-fa31-4823-ba62-4211efc7efe1/copy/0.log" Feb 18 13:00:15 crc kubenswrapper[4717]: I0218 13:00:15.383469 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vsg65/must-gather-5j6qp" Feb 18 13:00:15 crc kubenswrapper[4717]: I0218 13:00:15.490446 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/da82de6a-fa31-4823-ba62-4211efc7efe1-must-gather-output\") pod \"da82de6a-fa31-4823-ba62-4211efc7efe1\" (UID: \"da82de6a-fa31-4823-ba62-4211efc7efe1\") " Feb 18 13:00:15 crc kubenswrapper[4717]: I0218 13:00:15.490527 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfdz\" (UniqueName: \"kubernetes.io/projected/da82de6a-fa31-4823-ba62-4211efc7efe1-kube-api-access-xxfdz\") pod \"da82de6a-fa31-4823-ba62-4211efc7efe1\" (UID: \"da82de6a-fa31-4823-ba62-4211efc7efe1\") " Feb 18 13:00:15 crc kubenswrapper[4717]: I0218 13:00:15.498799 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da82de6a-fa31-4823-ba62-4211efc7efe1-kube-api-access-xxfdz" (OuterVolumeSpecName: 
"kube-api-access-xxfdz") pod "da82de6a-fa31-4823-ba62-4211efc7efe1" (UID: "da82de6a-fa31-4823-ba62-4211efc7efe1"). InnerVolumeSpecName "kube-api-access-xxfdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:00:15 crc kubenswrapper[4717]: I0218 13:00:15.594115 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxfdz\" (UniqueName: \"kubernetes.io/projected/da82de6a-fa31-4823-ba62-4211efc7efe1-kube-api-access-xxfdz\") on node \"crc\" DevicePath \"\"" Feb 18 13:00:15 crc kubenswrapper[4717]: I0218 13:00:15.662060 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da82de6a-fa31-4823-ba62-4211efc7efe1-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "da82de6a-fa31-4823-ba62-4211efc7efe1" (UID: "da82de6a-fa31-4823-ba62-4211efc7efe1"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:00:15 crc kubenswrapper[4717]: I0218 13:00:15.696383 4717 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/da82de6a-fa31-4823-ba62-4211efc7efe1-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 18 13:00:16 crc kubenswrapper[4717]: I0218 13:00:16.044958 4717 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vsg65_must-gather-5j6qp_da82de6a-fa31-4823-ba62-4211efc7efe1/copy/0.log" Feb 18 13:00:16 crc kubenswrapper[4717]: I0218 13:00:16.045930 4717 scope.go:117] "RemoveContainer" containerID="8dde4249c34dcc7318277941bd1174a7a1e1a47987c9d0fc33372bc665d5faa3" Feb 18 13:00:16 crc kubenswrapper[4717]: I0218 13:00:16.045971 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vsg65/must-gather-5j6qp" Feb 18 13:00:16 crc kubenswrapper[4717]: I0218 13:00:16.066723 4717 scope.go:117] "RemoveContainer" containerID="988d5f33b9e8fdbca47bc5c736f222e682f08211dd169fcf0a086898eae0425e" Feb 18 13:00:17 crc kubenswrapper[4717]: I0218 13:00:17.059169 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da82de6a-fa31-4823-ba62-4211efc7efe1" path="/var/lib/kubelet/pods/da82de6a-fa31-4823-ba62-4211efc7efe1/volumes" Feb 18 13:00:43 crc kubenswrapper[4717]: I0218 13:00:43.927092 4717 scope.go:117] "RemoveContainer" containerID="2739e11789e2ef369e13a55e3cf12ea83cb1b6146b001f36982e465a25d51f63" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.164064 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29523661-dflf4"] Feb 18 13:01:00 crc kubenswrapper[4717]: E0218 13:01:00.165031 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da82de6a-fa31-4823-ba62-4211efc7efe1" containerName="copy" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.165045 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="da82de6a-fa31-4823-ba62-4211efc7efe1" containerName="copy" Feb 18 13:01:00 crc kubenswrapper[4717]: E0218 13:01:00.165061 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da82de6a-fa31-4823-ba62-4211efc7efe1" containerName="gather" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.165067 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="da82de6a-fa31-4823-ba62-4211efc7efe1" containerName="gather" Feb 18 13:01:00 crc kubenswrapper[4717]: E0218 13:01:00.165098 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c4017a0-38fc-4544-8183-bd551148a492" containerName="collect-profiles" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.165104 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c4017a0-38fc-4544-8183-bd551148a492" containerName="collect-profiles" 
Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.165323 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="da82de6a-fa31-4823-ba62-4211efc7efe1" containerName="gather" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.165342 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c4017a0-38fc-4544-8183-bd551148a492" containerName="collect-profiles" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.165360 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="da82de6a-fa31-4823-ba62-4211efc7efe1" containerName="copy" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.166046 4717 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523661-dflf4" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.180427 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523661-dflf4"] Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.277496 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-combined-ca-bundle\") pod \"keystone-cron-29523661-dflf4\" (UID: \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\") " pod="openstack/keystone-cron-29523661-dflf4" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.277561 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-config-data\") pod \"keystone-cron-29523661-dflf4\" (UID: \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\") " pod="openstack/keystone-cron-29523661-dflf4" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.277597 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpjrn\" (UniqueName: 
\"kubernetes.io/projected/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-kube-api-access-dpjrn\") pod \"keystone-cron-29523661-dflf4\" (UID: \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\") " pod="openstack/keystone-cron-29523661-dflf4" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.277688 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-fernet-keys\") pod \"keystone-cron-29523661-dflf4\" (UID: \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\") " pod="openstack/keystone-cron-29523661-dflf4" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.379356 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-combined-ca-bundle\") pod \"keystone-cron-29523661-dflf4\" (UID: \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\") " pod="openstack/keystone-cron-29523661-dflf4" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.379422 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-config-data\") pod \"keystone-cron-29523661-dflf4\" (UID: \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\") " pod="openstack/keystone-cron-29523661-dflf4" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.379460 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpjrn\" (UniqueName: \"kubernetes.io/projected/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-kube-api-access-dpjrn\") pod \"keystone-cron-29523661-dflf4\" (UID: \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\") " pod="openstack/keystone-cron-29523661-dflf4" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.379543 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-fernet-keys\") pod \"keystone-cron-29523661-dflf4\" (UID: \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\") " pod="openstack/keystone-cron-29523661-dflf4" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.387244 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-config-data\") pod \"keystone-cron-29523661-dflf4\" (UID: \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\") " pod="openstack/keystone-cron-29523661-dflf4" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.388569 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-fernet-keys\") pod \"keystone-cron-29523661-dflf4\" (UID: \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\") " pod="openstack/keystone-cron-29523661-dflf4" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.394387 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-combined-ca-bundle\") pod \"keystone-cron-29523661-dflf4\" (UID: \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\") " pod="openstack/keystone-cron-29523661-dflf4" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.398182 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpjrn\" (UniqueName: \"kubernetes.io/projected/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-kube-api-access-dpjrn\") pod \"keystone-cron-29523661-dflf4\" (UID: \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\") " pod="openstack/keystone-cron-29523661-dflf4" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.484742 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523661-dflf4" Feb 18 13:01:00 crc kubenswrapper[4717]: I0218 13:01:00.944849 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523661-dflf4"] Feb 18 13:01:01 crc kubenswrapper[4717]: I0218 13:01:01.470910 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523661-dflf4" event={"ID":"7514d6eb-6da9-4ebc-b089-fea2aa0e292c","Type":"ContainerStarted","Data":"3cacf70b18315593b1973a7dd9d77425c4b5766a3a1dbc9a507516d73739e9e7"} Feb 18 13:01:01 crc kubenswrapper[4717]: I0218 13:01:01.471308 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523661-dflf4" event={"ID":"7514d6eb-6da9-4ebc-b089-fea2aa0e292c","Type":"ContainerStarted","Data":"4251d938dffe38b63be43fa329908fbc6c8a5cb14072d410148c207635aeb156"} Feb 18 13:01:01 crc kubenswrapper[4717]: I0218 13:01:01.496878 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29523661-dflf4" podStartSLOduration=1.496849195 podStartE2EDuration="1.496849195s" podCreationTimestamp="2026-02-18 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 13:01:01.490535654 +0000 UTC m=+4295.892636970" watchObservedRunningTime="2026-02-18 13:01:01.496849195 +0000 UTC m=+4295.898950511" Feb 18 13:01:04 crc kubenswrapper[4717]: I0218 13:01:04.498913 4717 generic.go:334] "Generic (PLEG): container finished" podID="7514d6eb-6da9-4ebc-b089-fea2aa0e292c" containerID="3cacf70b18315593b1973a7dd9d77425c4b5766a3a1dbc9a507516d73739e9e7" exitCode=0 Feb 18 13:01:04 crc kubenswrapper[4717]: I0218 13:01:04.498986 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523661-dflf4" 
event={"ID":"7514d6eb-6da9-4ebc-b089-fea2aa0e292c","Type":"ContainerDied","Data":"3cacf70b18315593b1973a7dd9d77425c4b5766a3a1dbc9a507516d73739e9e7"} Feb 18 13:01:05 crc kubenswrapper[4717]: I0218 13:01:05.931585 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523661-dflf4" Feb 18 13:01:06 crc kubenswrapper[4717]: I0218 13:01:06.011426 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-fernet-keys\") pod \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\" (UID: \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\") " Feb 18 13:01:06 crc kubenswrapper[4717]: I0218 13:01:06.011509 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpjrn\" (UniqueName: \"kubernetes.io/projected/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-kube-api-access-dpjrn\") pod \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\" (UID: \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\") " Feb 18 13:01:06 crc kubenswrapper[4717]: I0218 13:01:06.011648 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-combined-ca-bundle\") pod \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\" (UID: \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\") " Feb 18 13:01:06 crc kubenswrapper[4717]: I0218 13:01:06.011719 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-config-data\") pod \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\" (UID: \"7514d6eb-6da9-4ebc-b089-fea2aa0e292c\") " Feb 18 13:01:06 crc kubenswrapper[4717]: I0218 13:01:06.029739 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-kube-api-access-dpjrn" 
(OuterVolumeSpecName: "kube-api-access-dpjrn") pod "7514d6eb-6da9-4ebc-b089-fea2aa0e292c" (UID: "7514d6eb-6da9-4ebc-b089-fea2aa0e292c"). InnerVolumeSpecName "kube-api-access-dpjrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:01:06 crc kubenswrapper[4717]: I0218 13:01:06.031149 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7514d6eb-6da9-4ebc-b089-fea2aa0e292c" (UID: "7514d6eb-6da9-4ebc-b089-fea2aa0e292c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:01:06 crc kubenswrapper[4717]: I0218 13:01:06.050046 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7514d6eb-6da9-4ebc-b089-fea2aa0e292c" (UID: "7514d6eb-6da9-4ebc-b089-fea2aa0e292c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:01:06 crc kubenswrapper[4717]: I0218 13:01:06.072932 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-config-data" (OuterVolumeSpecName: "config-data") pod "7514d6eb-6da9-4ebc-b089-fea2aa0e292c" (UID: "7514d6eb-6da9-4ebc-b089-fea2aa0e292c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:01:06 crc kubenswrapper[4717]: I0218 13:01:06.114731 4717 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 13:01:06 crc kubenswrapper[4717]: I0218 13:01:06.114777 4717 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 13:01:06 crc kubenswrapper[4717]: I0218 13:01:06.114788 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpjrn\" (UniqueName: \"kubernetes.io/projected/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-kube-api-access-dpjrn\") on node \"crc\" DevicePath \"\"" Feb 18 13:01:06 crc kubenswrapper[4717]: I0218 13:01:06.114800 4717 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7514d6eb-6da9-4ebc-b089-fea2aa0e292c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:01:06 crc kubenswrapper[4717]: I0218 13:01:06.518807 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523661-dflf4" event={"ID":"7514d6eb-6da9-4ebc-b089-fea2aa0e292c","Type":"ContainerDied","Data":"4251d938dffe38b63be43fa329908fbc6c8a5cb14072d410148c207635aeb156"} Feb 18 13:01:06 crc kubenswrapper[4717]: I0218 13:01:06.519091 4717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4251d938dffe38b63be43fa329908fbc6c8a5cb14072d410148c207635aeb156" Feb 18 13:01:06 crc kubenswrapper[4717]: I0218 13:01:06.518877 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523661-dflf4" Feb 18 13:01:12 crc kubenswrapper[4717]: I0218 13:01:12.772820 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 13:01:12 crc kubenswrapper[4717]: I0218 13:01:12.773443 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 13:01:42 crc kubenswrapper[4717]: I0218 13:01:42.773545 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 13:01:42 crc kubenswrapper[4717]: I0218 13:01:42.774313 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 13:01:44 crc kubenswrapper[4717]: I0218 13:01:44.030175 4717 scope.go:117] "RemoveContainer" containerID="94c6f94e24af1ab125c42c98503f671927a5aa33bb34f924fcbc207d2e9f08ef" Feb 18 13:02:12 crc kubenswrapper[4717]: I0218 13:02:12.772452 4717 patch_prober.go:28] interesting pod/machine-config-daemon-5wbk5 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 13:02:12 crc kubenswrapper[4717]: I0218 13:02:12.773025 4717 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 13:02:12 crc kubenswrapper[4717]: I0218 13:02:12.773099 4717 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" Feb 18 13:02:12 crc kubenswrapper[4717]: I0218 13:02:12.774063 4717 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e1e883061a234b6789e4237399a5987905c9f9f0f9ff4bae020b2c326bc1a9a"} pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 13:02:12 crc kubenswrapper[4717]: I0218 13:02:12.774140 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" podUID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerName="machine-config-daemon" containerID="cri-o://4e1e883061a234b6789e4237399a5987905c9f9f0f9ff4bae020b2c326bc1a9a" gracePeriod=600 Feb 18 13:02:13 crc kubenswrapper[4717]: I0218 13:02:13.117148 4717 generic.go:334] "Generic (PLEG): container finished" podID="823580ef-975b-4298-955b-fb3c0b5fefc3" containerID="4e1e883061a234b6789e4237399a5987905c9f9f0f9ff4bae020b2c326bc1a9a" exitCode=0 Feb 18 13:02:13 crc kubenswrapper[4717]: I0218 13:02:13.117659 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" 
event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerDied","Data":"4e1e883061a234b6789e4237399a5987905c9f9f0f9ff4bae020b2c326bc1a9a"} Feb 18 13:02:13 crc kubenswrapper[4717]: I0218 13:02:13.117700 4717 scope.go:117] "RemoveContainer" containerID="a9069744d0e8b9dda48da0a38aeb9c7acf97833cf3c68a10a2ddd20e74933b8a" Feb 18 13:02:14 crc kubenswrapper[4717]: I0218 13:02:14.131393 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5wbk5" event={"ID":"823580ef-975b-4298-955b-fb3c0b5fefc3","Type":"ContainerStarted","Data":"f73f3e6bc7bbfe3aec7ef55ec2bb5d90f95441c2c8d8c5aba7a723fc8787538d"} Feb 18 13:02:19 crc kubenswrapper[4717]: I0218 13:02:19.735623 4717 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4p64n"] Feb 18 13:02:19 crc kubenswrapper[4717]: E0218 13:02:19.736726 4717 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7514d6eb-6da9-4ebc-b089-fea2aa0e292c" containerName="keystone-cron" Feb 18 13:02:19 crc kubenswrapper[4717]: I0218 13:02:19.736744 4717 state_mem.go:107] "Deleted CPUSet assignment" podUID="7514d6eb-6da9-4ebc-b089-fea2aa0e292c" containerName="keystone-cron" Feb 18 13:02:19 crc kubenswrapper[4717]: I0218 13:02:19.736954 4717 memory_manager.go:354] "RemoveStaleState removing state" podUID="7514d6eb-6da9-4ebc-b089-fea2aa0e292c" containerName="keystone-cron" Feb 18 13:02:19 crc kubenswrapper[4717]: I0218 13:02:19.738709 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:19 crc kubenswrapper[4717]: I0218 13:02:19.749492 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4p64n"] Feb 18 13:02:19 crc kubenswrapper[4717]: I0218 13:02:19.875371 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-catalog-content\") pod \"redhat-marketplace-4p64n\" (UID: \"a7edf56e-cf34-4f28-a3db-bc0d015fb38e\") " pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:19 crc kubenswrapper[4717]: I0218 13:02:19.875433 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw4f7\" (UniqueName: \"kubernetes.io/projected/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-kube-api-access-rw4f7\") pod \"redhat-marketplace-4p64n\" (UID: \"a7edf56e-cf34-4f28-a3db-bc0d015fb38e\") " pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:19 crc kubenswrapper[4717]: I0218 13:02:19.875492 4717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-utilities\") pod \"redhat-marketplace-4p64n\" (UID: \"a7edf56e-cf34-4f28-a3db-bc0d015fb38e\") " pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:19 crc kubenswrapper[4717]: I0218 13:02:19.977819 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-catalog-content\") pod \"redhat-marketplace-4p64n\" (UID: \"a7edf56e-cf34-4f28-a3db-bc0d015fb38e\") " pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:19 crc kubenswrapper[4717]: I0218 13:02:19.977911 4717 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rw4f7\" (UniqueName: \"kubernetes.io/projected/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-kube-api-access-rw4f7\") pod \"redhat-marketplace-4p64n\" (UID: \"a7edf56e-cf34-4f28-a3db-bc0d015fb38e\") " pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:19 crc kubenswrapper[4717]: I0218 13:02:19.978031 4717 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-utilities\") pod \"redhat-marketplace-4p64n\" (UID: \"a7edf56e-cf34-4f28-a3db-bc0d015fb38e\") " pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:19 crc kubenswrapper[4717]: I0218 13:02:19.978500 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-catalog-content\") pod \"redhat-marketplace-4p64n\" (UID: \"a7edf56e-cf34-4f28-a3db-bc0d015fb38e\") " pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:19 crc kubenswrapper[4717]: I0218 13:02:19.978651 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-utilities\") pod \"redhat-marketplace-4p64n\" (UID: \"a7edf56e-cf34-4f28-a3db-bc0d015fb38e\") " pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:19 crc kubenswrapper[4717]: I0218 13:02:19.997596 4717 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw4f7\" (UniqueName: \"kubernetes.io/projected/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-kube-api-access-rw4f7\") pod \"redhat-marketplace-4p64n\" (UID: \"a7edf56e-cf34-4f28-a3db-bc0d015fb38e\") " pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:20 crc kubenswrapper[4717]: I0218 13:02:20.056241 4717 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:20 crc kubenswrapper[4717]: I0218 13:02:20.527087 4717 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4p64n"] Feb 18 13:02:21 crc kubenswrapper[4717]: I0218 13:02:21.207114 4717 generic.go:334] "Generic (PLEG): container finished" podID="a7edf56e-cf34-4f28-a3db-bc0d015fb38e" containerID="80ce81cec0e30b619cb15e6fbeefa4c9d23bf5d86e3bf981de11b89e6aa28ed3" exitCode=0 Feb 18 13:02:21 crc kubenswrapper[4717]: I0218 13:02:21.207176 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4p64n" event={"ID":"a7edf56e-cf34-4f28-a3db-bc0d015fb38e","Type":"ContainerDied","Data":"80ce81cec0e30b619cb15e6fbeefa4c9d23bf5d86e3bf981de11b89e6aa28ed3"} Feb 18 13:02:21 crc kubenswrapper[4717]: I0218 13:02:21.207548 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4p64n" event={"ID":"a7edf56e-cf34-4f28-a3db-bc0d015fb38e","Type":"ContainerStarted","Data":"846143ae2e3677deb26211713a0bc9a82ed9c6e8591cff2337373222b1d096b9"} Feb 18 13:02:21 crc kubenswrapper[4717]: I0218 13:02:21.209861 4717 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 13:02:23 crc kubenswrapper[4717]: I0218 13:02:23.229911 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4p64n" event={"ID":"a7edf56e-cf34-4f28-a3db-bc0d015fb38e","Type":"ContainerStarted","Data":"ab0a9681fbf7a0202560e7dedf166c722635f7559c82206e9e2e9a5f2be7a608"} Feb 18 13:02:24 crc kubenswrapper[4717]: I0218 13:02:24.241849 4717 generic.go:334] "Generic (PLEG): container finished" podID="a7edf56e-cf34-4f28-a3db-bc0d015fb38e" containerID="ab0a9681fbf7a0202560e7dedf166c722635f7559c82206e9e2e9a5f2be7a608" exitCode=0 Feb 18 13:02:24 crc kubenswrapper[4717]: I0218 13:02:24.242035 4717 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-4p64n" event={"ID":"a7edf56e-cf34-4f28-a3db-bc0d015fb38e","Type":"ContainerDied","Data":"ab0a9681fbf7a0202560e7dedf166c722635f7559c82206e9e2e9a5f2be7a608"} Feb 18 13:02:26 crc kubenswrapper[4717]: I0218 13:02:26.271465 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4p64n" event={"ID":"a7edf56e-cf34-4f28-a3db-bc0d015fb38e","Type":"ContainerStarted","Data":"8a2b9bb30bc671bd5d0518568cf7095e21ab05fe62d98be6bf075e44325979dd"} Feb 18 13:02:26 crc kubenswrapper[4717]: I0218 13:02:26.295567 4717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4p64n" podStartSLOduration=3.816258959 podStartE2EDuration="7.295549035s" podCreationTimestamp="2026-02-18 13:02:19 +0000 UTC" firstStartedPulling="2026-02-18 13:02:21.209556694 +0000 UTC m=+4375.611658010" lastFinishedPulling="2026-02-18 13:02:24.68884676 +0000 UTC m=+4379.090948086" observedRunningTime="2026-02-18 13:02:26.289204053 +0000 UTC m=+4380.691305369" watchObservedRunningTime="2026-02-18 13:02:26.295549035 +0000 UTC m=+4380.697650341" Feb 18 13:02:30 crc kubenswrapper[4717]: I0218 13:02:30.057439 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:30 crc kubenswrapper[4717]: I0218 13:02:30.059388 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:30 crc kubenswrapper[4717]: I0218 13:02:30.105524 4717 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:30 crc kubenswrapper[4717]: I0218 13:02:30.361225 4717 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:31 crc kubenswrapper[4717]: I0218 13:02:31.750581 4717 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4p64n"] Feb 18 13:02:33 crc kubenswrapper[4717]: I0218 13:02:33.337146 4717 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4p64n" podUID="a7edf56e-cf34-4f28-a3db-bc0d015fb38e" containerName="registry-server" containerID="cri-o://8a2b9bb30bc671bd5d0518568cf7095e21ab05fe62d98be6bf075e44325979dd" gracePeriod=2 Feb 18 13:02:33 crc kubenswrapper[4717]: I0218 13:02:33.895636 4717 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:33 crc kubenswrapper[4717]: I0218 13:02:33.991891 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-catalog-content\") pod \"a7edf56e-cf34-4f28-a3db-bc0d015fb38e\" (UID: \"a7edf56e-cf34-4f28-a3db-bc0d015fb38e\") " Feb 18 13:02:33 crc kubenswrapper[4717]: I0218 13:02:33.992443 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw4f7\" (UniqueName: \"kubernetes.io/projected/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-kube-api-access-rw4f7\") pod \"a7edf56e-cf34-4f28-a3db-bc0d015fb38e\" (UID: \"a7edf56e-cf34-4f28-a3db-bc0d015fb38e\") " Feb 18 13:02:33 crc kubenswrapper[4717]: I0218 13:02:33.992604 4717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-utilities\") pod \"a7edf56e-cf34-4f28-a3db-bc0d015fb38e\" (UID: \"a7edf56e-cf34-4f28-a3db-bc0d015fb38e\") " Feb 18 13:02:33 crc kubenswrapper[4717]: I0218 13:02:33.993348 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-utilities" (OuterVolumeSpecName: "utilities") pod 
"a7edf56e-cf34-4f28-a3db-bc0d015fb38e" (UID: "a7edf56e-cf34-4f28-a3db-bc0d015fb38e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.001731 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-kube-api-access-rw4f7" (OuterVolumeSpecName: "kube-api-access-rw4f7") pod "a7edf56e-cf34-4f28-a3db-bc0d015fb38e" (UID: "a7edf56e-cf34-4f28-a3db-bc0d015fb38e"). InnerVolumeSpecName "kube-api-access-rw4f7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.019930 4717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7edf56e-cf34-4f28-a3db-bc0d015fb38e" (UID: "a7edf56e-cf34-4f28-a3db-bc0d015fb38e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.096100 4717 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.096146 4717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw4f7\" (UniqueName: \"kubernetes.io/projected/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-kube-api-access-rw4f7\") on node \"crc\" DevicePath \"\"" Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.096161 4717 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7edf56e-cf34-4f28-a3db-bc0d015fb38e-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.348154 4717 generic.go:334] "Generic (PLEG): container finished" podID="a7edf56e-cf34-4f28-a3db-bc0d015fb38e" containerID="8a2b9bb30bc671bd5d0518568cf7095e21ab05fe62d98be6bf075e44325979dd" exitCode=0 Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.348208 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4p64n" event={"ID":"a7edf56e-cf34-4f28-a3db-bc0d015fb38e","Type":"ContainerDied","Data":"8a2b9bb30bc671bd5d0518568cf7095e21ab05fe62d98be6bf075e44325979dd"} Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.348228 4717 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4p64n" Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.348241 4717 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4p64n" event={"ID":"a7edf56e-cf34-4f28-a3db-bc0d015fb38e","Type":"ContainerDied","Data":"846143ae2e3677deb26211713a0bc9a82ed9c6e8591cff2337373222b1d096b9"} Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.348275 4717 scope.go:117] "RemoveContainer" containerID="8a2b9bb30bc671bd5d0518568cf7095e21ab05fe62d98be6bf075e44325979dd" Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.381847 4717 scope.go:117] "RemoveContainer" containerID="ab0a9681fbf7a0202560e7dedf166c722635f7559c82206e9e2e9a5f2be7a608" Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.426064 4717 scope.go:117] "RemoveContainer" containerID="80ce81cec0e30b619cb15e6fbeefa4c9d23bf5d86e3bf981de11b89e6aa28ed3" Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.429974 4717 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4p64n"] Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.441648 4717 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4p64n"] Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.461965 4717 scope.go:117] "RemoveContainer" containerID="8a2b9bb30bc671bd5d0518568cf7095e21ab05fe62d98be6bf075e44325979dd" Feb 18 13:02:34 crc kubenswrapper[4717]: E0218 13:02:34.462545 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a2b9bb30bc671bd5d0518568cf7095e21ab05fe62d98be6bf075e44325979dd\": container with ID starting with 8a2b9bb30bc671bd5d0518568cf7095e21ab05fe62d98be6bf075e44325979dd not found: ID does not exist" containerID="8a2b9bb30bc671bd5d0518568cf7095e21ab05fe62d98be6bf075e44325979dd" Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.462595 4717 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2b9bb30bc671bd5d0518568cf7095e21ab05fe62d98be6bf075e44325979dd"} err="failed to get container status \"8a2b9bb30bc671bd5d0518568cf7095e21ab05fe62d98be6bf075e44325979dd\": rpc error: code = NotFound desc = could not find container \"8a2b9bb30bc671bd5d0518568cf7095e21ab05fe62d98be6bf075e44325979dd\": container with ID starting with 8a2b9bb30bc671bd5d0518568cf7095e21ab05fe62d98be6bf075e44325979dd not found: ID does not exist" Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.462626 4717 scope.go:117] "RemoveContainer" containerID="ab0a9681fbf7a0202560e7dedf166c722635f7559c82206e9e2e9a5f2be7a608" Feb 18 13:02:34 crc kubenswrapper[4717]: E0218 13:02:34.462941 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab0a9681fbf7a0202560e7dedf166c722635f7559c82206e9e2e9a5f2be7a608\": container with ID starting with ab0a9681fbf7a0202560e7dedf166c722635f7559c82206e9e2e9a5f2be7a608 not found: ID does not exist" containerID="ab0a9681fbf7a0202560e7dedf166c722635f7559c82206e9e2e9a5f2be7a608" Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.462983 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab0a9681fbf7a0202560e7dedf166c722635f7559c82206e9e2e9a5f2be7a608"} err="failed to get container status \"ab0a9681fbf7a0202560e7dedf166c722635f7559c82206e9e2e9a5f2be7a608\": rpc error: code = NotFound desc = could not find container \"ab0a9681fbf7a0202560e7dedf166c722635f7559c82206e9e2e9a5f2be7a608\": container with ID starting with ab0a9681fbf7a0202560e7dedf166c722635f7559c82206e9e2e9a5f2be7a608 not found: ID does not exist" Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.463014 4717 scope.go:117] "RemoveContainer" containerID="80ce81cec0e30b619cb15e6fbeefa4c9d23bf5d86e3bf981de11b89e6aa28ed3" Feb 18 13:02:34 crc kubenswrapper[4717]: E0218 
13:02:34.463241 4717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80ce81cec0e30b619cb15e6fbeefa4c9d23bf5d86e3bf981de11b89e6aa28ed3\": container with ID starting with 80ce81cec0e30b619cb15e6fbeefa4c9d23bf5d86e3bf981de11b89e6aa28ed3 not found: ID does not exist" containerID="80ce81cec0e30b619cb15e6fbeefa4c9d23bf5d86e3bf981de11b89e6aa28ed3" Feb 18 13:02:34 crc kubenswrapper[4717]: I0218 13:02:34.463294 4717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ce81cec0e30b619cb15e6fbeefa4c9d23bf5d86e3bf981de11b89e6aa28ed3"} err="failed to get container status \"80ce81cec0e30b619cb15e6fbeefa4c9d23bf5d86e3bf981de11b89e6aa28ed3\": rpc error: code = NotFound desc = could not find container \"80ce81cec0e30b619cb15e6fbeefa4c9d23bf5d86e3bf981de11b89e6aa28ed3\": container with ID starting with 80ce81cec0e30b619cb15e6fbeefa4c9d23bf5d86e3bf981de11b89e6aa28ed3 not found: ID does not exist" Feb 18 13:02:35 crc kubenswrapper[4717]: I0218 13:02:35.051410 4717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7edf56e-cf34-4f28-a3db-bc0d015fb38e" path="/var/lib/kubelet/pods/a7edf56e-cf34-4f28-a3db-bc0d015fb38e/volumes" Feb 18 13:02:44 crc kubenswrapper[4717]: I0218 13:02:44.078176 4717 scope.go:117] "RemoveContainer" containerID="1ee51cb80a1d628c66c2626408cf21e7d8a1ab4461f70a4ac87d5374f2873845" Feb 18 13:02:44 crc kubenswrapper[4717]: I0218 13:02:44.101204 4717 scope.go:117] "RemoveContainer" containerID="0b719b635fdade88f988b5f64c0db0bf91613aa0c2a6e94808599b7b8be997cf" Feb 18 13:02:44 crc kubenswrapper[4717]: I0218 13:02:44.161853 4717 scope.go:117] "RemoveContainer" containerID="9108cc889a76aeee42459258443454c5ca7b3b9dbc796020f383f6a9cd89ba29"